
Commit 29b006a

netoptimizer authored and davem330 committed
mlx5: more strict use of page_pool API
mlx5: more strict use of page_pool API

The mlx5 driver is using page_pool, but not (currently) for DMA mapping, and it is a little too relaxed about returning or releasing page resources, since that is not strictly necessary when DMA mappings are not used.

This patchset is working towards tracking page_pool resources, in order to know about in-flight frames on shutdown, so fix the places where mlx5 leaks page_pool resources:

In case of dma_mapping_error(), recycle the page back into the page_pool.

In mlx5e_free_rq(), move the page_pool_destroy() call to after the mlx5e_page_release() calls, as that is more correct.

In mlx5e_page_release(), when no recycle was requested, release the page from the page_pool via page_pool_release_page().

Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Reviewed-by: Tariq Toukan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
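For readers new to the API, the ownership contract that these fixes restore can be sketched roughly as follows. This is an illustrative sketch only, not mlx5 code: the rx_page_* helper names are hypothetical, and it uses the page_pool calls as they existed at the time of this commit. Every page taken from a page_pool must either be recycled into the pool or explicitly released from the pool before the driver drops its reference; otherwise the pool's in-flight accounting never drains.

#include <linux/mm.h>
#include <net/page_pool.h>

/* Take one page out of the pool (the pool now counts it as in-flight). */
static struct page *rx_page_get(struct page_pool *pool)
{
        return page_pool_dev_alloc_pages(pool);
}

/* Fast path: hand the page straight back to the pool. */
static void rx_page_recycle(struct page_pool *pool, struct page *page)
{
        page_pool_recycle_direct(pool, page);
}

/* Slow path: the page leaves page_pool control (e.g. handed up the
 * stack).  Tell the pool first so its accounting stays balanced,
 * then drop the driver's own reference as usual. */
static void rx_page_leave_pool(struct page_pool *pool, struct page *page)
{
        page_pool_release_page(pool, page);
        put_page(page);
}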
1 parent e54cfd7 commit 29b006a

2 files changed: +7 lines, −5 lines

drivers/net/ethernet/mellanox/mlx5/core/en_main.c

Lines changed: 5 additions & 4 deletions
@@ -625,10 +625,6 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
 	if (rq->xdp_prog)
 		bpf_prog_put(rq->xdp_prog);
 
-	xdp_rxq_info_unreg(&rq->xdp_rxq);
-	if (rq->page_pool)
-		page_pool_destroy(rq->page_pool);
-
 	switch (rq->wq_type) {
 	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
 		kvfree(rq->mpwqe.info);
@@ -645,6 +641,11 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
 
 		mlx5e_page_release(rq, dma_info, false);
 	}
+
+	xdp_rxq_info_unreg(&rq->xdp_rxq);
+	if (rq->page_pool)
+		page_pool_destroy(rq->page_pool);
+
 	mlx5_wq_destroy(&rq->wq_ctrl);
 }
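The point of the reordering is that page_pool_destroy() should only run once the driver no longer holds any pages from the pool. A minimal sketch of that teardown ordering, with a hypothetical rx_queue structure standing in for the real mlx5e_rq and the pages released the same way the non-recycle path above does:

#include <linux/mm.h>
#include <net/page_pool.h>

struct rx_queue {
        struct page_pool *page_pool;
        struct page **pages;
        int nr_pages;
};

static void rx_queue_free(struct rx_queue *rxq)
{
        int i;

        /* First let go of every page the driver still holds ... */
        for (i = 0; i < rxq->nr_pages; i++) {
                page_pool_release_page(rxq->page_pool, rxq->pages[i]);
                put_page(rxq->pages[i]);
        }

        /* ... and only then tear down the pool itself. */
        if (rxq->page_pool)
                page_pool_destroy(rxq->page_pool);
}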

drivers/net/ethernet/mellanox/mlx5/core/en_rx.c

Lines changed: 2 additions & 1 deletion
@@ -248,7 +248,7 @@ static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
 	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
 				      PAGE_SIZE, rq->buff.map_dir);
 	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
-		put_page(dma_info->page);
+		page_pool_recycle_direct(rq->page_pool, dma_info->page);
 		dma_info->page = NULL;
 		return -ENOMEM;
 	}
@@ -272,6 +272,7 @@ void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
 		page_pool_recycle_direct(rq->page_pool, dma_info->page);
 	} else {
 		mlx5e_page_dma_unmap(rq, dma_info);
+		page_pool_release_page(rq->page_pool, dma_info->page);
 		put_page(dma_info->page);
 	}
 }
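The first hunk fixes the DMA-mapping error path: the page was obtained from the page_pool, so on failure it must go back into the pool instead of being freed directly with put_page(). A minimal sketch of the same pattern, with hypothetical names but the same page_pool and DMA calls:

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

static int rx_alloc_mapped_page(struct device *dev, struct page_pool *pool,
                                struct page **pagep, dma_addr_t *addrp)
{
        struct page *page = page_pool_dev_alloc_pages(pool);

        if (unlikely(!page))
                return -ENOMEM;

        *addrp = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
        if (unlikely(dma_mapping_error(dev, *addrp))) {
                /* The page came from the pool, so return it to the pool. */
                page_pool_recycle_direct(pool, page);
                return -ENOMEM;
        }

        *pagep = page;
        return 0;
}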
