
Commit 10c75cc

RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
rdma_for_each_block() makes assumptions about how the SGL is constructed that don't work if the block size is below the page size used to build the SGL.

The rules for umem SGL construction require that the SGs all be PAGE_SIZE aligned, and we don't encode the actual byte offset of the VA range inside the SGL using offset and length. So rdma_for_each_block() has no idea where the actual starting/ending point is to compute the first/last block boundary if the starting address falls within an SGL entry.

Fixing the SGL construction turns out to be really hard and will be the subject of other patches. For now, block smaller pages.

Fixes: 4a35339 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Leon Romanovsky <[email protected]>
Reviewed-by: Shiraz Saleem <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
1 parent a40c20d commit 10c75cc
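
The boundary problem described in the commit message can be seen with plain arithmetic. Below is a minimal userspace sketch, not kernel code: it assumes 4 KiB pages, and the addresses and the 256-byte block size are made up for illustration. Because the umem SGL only records page-aligned chunks and carries no byte offset for the VA range, a walk in blocks smaller than PAGE_SIZE has no way to find the real first block boundary.

/*
 * Standalone sketch (not kernel code) of the problem described in the
 * commit message, assuming 4 KiB pages. The umem SGL only records
 * page-aligned chunks and does not carry the byte offset of the VA range,
 * so a walk in blocks smaller than PAGE_SIZE cannot tell where the MR
 * really starts inside the first page.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	unsigned long va_start = 0x10000100UL;	/* MR starts 0x100 into a page */
	unsigned long sg_start = va_start & ~(PAGE_SIZE - 1); /* what the SGL records */
	unsigned long block = 256;		/* hypothetical sub-PAGE_SIZE block size */

	/* A block iterator that only sees the SGL starts at the page boundary... */
	unsigned long first_block = sg_start;

	/* ...so the first emitted block is misaligned with the real VA range. */
	printf("block size %lu: first block %#lx, real start %#lx (off by %lu)\n",
	       block, first_block, va_start, va_start - first_block);

	/* With blocks of PAGE_SIZE or larger the page-aligned SGL is enough,
	 * which is why the fix simply masks out the smaller page sizes. */
	return 0;
}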

1 file changed: 6 additions, 0 deletions

drivers/infiniband/core/umem.c

Lines changed: 6 additions & 0 deletions
@@ -151,6 +151,12 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 	dma_addr_t mask;
 	int i;
 
+	/* rdma_for_each_block() has a bug if the page size is smaller than the
+	 * page size used to build the umem. For now prevent smaller page sizes
+	 * from being returned.
+	 */
+	pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);
+
 	/* At minimum, drivers must support PAGE_SIZE or smaller */
 	if (WARN_ON(!(pgsz_bitmap & GENMASK(PAGE_SHIFT, 0))))
 		return 0;
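
To see the effect of the added mask, here is a small userspace sketch. It assumes BITS_PER_LONG is 64 and PAGE_SHIFT is 12 (4 KiB pages); the GENMASK() macro below mirrors the kernel's definition, and the driver capability bitmap is a made-up example.

/*
 * Userspace sketch of the added mask, assuming BITS_PER_LONG = 64 and
 * PAGE_SHIFT = 12 (4 KiB pages). GENMASK() below mirrors the kernel macro;
 * the capability bitmap is a made-up example.
 */
#include <stdio.h>

#define BITS_PER_LONG	64
#define PAGE_SHIFT	12
/* Bits h..l set, everything else clear. */
#define GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

int main(void)
{
	/* Hypothetical driver-supported page sizes: 256 B, 4 KiB and 2 MiB. */
	unsigned long pgsz_bitmap = (1UL << 8) | (1UL << 12) | (1UL << 21);

	printf("before: 0x%lx\n", pgsz_bitmap);

	/* The line added by the patch: drop every bit below PAGE_SHIFT. */
	pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);

	printf("after:  0x%lx\n", pgsz_bitmap);	/* the 256 B bit is gone */
	return 0;
}

After the mask, ib_umem_find_best_pgsz() can only pick PAGE_SIZE or larger block sizes, so rdma_for_each_block() never has to split a page-aligned SG entry.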
