
Commit f07116d

urezki authored and torvalds committed
mm/vmalloc: respect passed gfp_mask when doing preloading
Allocation functions should comply with the given gfp_mask as much as possible. The preallocation code in alloc_vmap_area doesn't follow that pattern: it uses a hardcoded GFP_KERNEL. Although this doesn't make much practical difference, because vmalloc is not GFP_NOWAIT compliant in general (e.g. page table allocations are GFP_KERNEL), there is no reason to spread that bad habit, and it is good to fix the antipattern.

[[email protected]: rewrite changelog]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Daniel Wagner <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Oleksiy Avramchenko <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 81f1ba5 commit f07116d

File tree

1 file changed: 4 additions, 4 deletions

mm/vmalloc.c

Lines changed: 4 additions & 4 deletions
@@ -1063,17 +1063,17 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		return ERR_PTR(-EBUSY);
 
 	might_sleep();
+	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
 
-	va = kmem_cache_alloc_node(vmap_area_cachep,
-			gfp_mask & GFP_RECLAIM_MASK, node);
+	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
 
 	/*
 	 * Only scan the relevant parts containing pointers to other objects
 	 * to avoid false negatives.
 	 */
-	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
 retry:
 	/*
@@ -1099,7 +1099,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Just proceed as it is. If needed "overflow" path
 	 * will refill the cache we allocate from.
 	 */
-	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+	pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
 	spin_lock(&vmap_area_lock);
 