Commit 56102c0

isilence authored and kuba-moo committed
net: page_pool: add memory provider helpers
Add helpers for memory providers to interact with page pools.
net_mp_niov_{set,clear}_page_pool() serve to [dis]associate a net_iov
with a page pool. If used, the memory provider is responsible for
matching "set" calls with "clear" once a net_iov is no longer going to
be used by a page pool, when changing page pools, etc.

Acked-by: Jakub Kicinski <[email protected]>
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: David Wei <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent 69e3953 commit 56102c0

File tree

2 files changed: +47 -0 lines changed

include/net/page_pool/memory_provider.h

Lines changed: 19 additions & 0 deletions
@@ -18,4 +18,23 @@ struct memory_provider_ops {
 	void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq);
 };
 
+bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr);
+void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov);
+void net_mp_niov_clear_page_pool(struct net_iov *niov);
+
+/**
+ * net_mp_netmem_place_in_cache() - give a netmem to a page pool
+ * @pool:	the page pool to place the netmem into
+ * @netmem:	netmem to give
+ *
+ * Push an accounted netmem into the page pool's allocation cache. The caller
+ * must ensure that there is space in the cache. It should only be called off
+ * the mp_ops->alloc_netmems() path.
+ */
+static inline void net_mp_netmem_place_in_cache(struct page_pool *pool,
+						netmem_ref netmem)
+{
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
+
 #endif

net/core/page_pool.c

Lines changed: 28 additions & 0 deletions
@@ -1197,3 +1197,31 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool net_mp_niov_set_dma_addr(struct net_iov *niov, dma_addr_t addr)
+{
+	return page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov), addr);
+}
+
+/* Associate a niov with a page pool. Should follow with a matching
+ * net_mp_niov_clear_page_pool()
+ */
+void net_mp_niov_set_page_pool(struct page_pool *pool, struct net_iov *niov)
+{
+	netmem_ref netmem = net_iov_to_netmem(niov);
+
+	page_pool_set_pp_info(pool, netmem);
+
+	pool->pages_state_hold_cnt++;
+	trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt);
+}
+
+/* Disassociate a niov from a page pool. Should only be used in the
+ * ->release_netmem() path.
+ */
+void net_mp_niov_clear_page_pool(struct net_iov *niov)
+{
+	netmem_ref netmem = net_iov_to_netmem(niov);
+
+	page_pool_clear_pp_info(netmem);
+}
