Commit 5091b74

Christoph Lameter authored and torvalds committed
mm: slub: optimise the SLUB fast path to avoid pfmemalloc checks
This patch removes the check for pfmemalloc from the alloc hotpath and
puts the logic after the election of a new per cpu slab.  For a
pfmemalloc page we do not use the fast path but force the use of the
slow path, which is also used for the debug case.

This has the side-effect of weakening pfmemalloc processing in the
following way:

1. A process that is allocating for network swap calls __slab_alloc.
   pfmemalloc_match is true, so the freelist is loaded and c->freelist
   now points to a pfmemalloc page.

2. A process that is attempting normal allocations calls slab_alloc,
   finds the pfmemalloc page on the freelist and uses it because it did
   not check pfmemalloc_match().

The patch allows non-pfmemalloc allocations to use pfmemalloc pages,
with the kmalloc slabs being the most vulnerable caches on the grounds
that they are most likely to have a mix of pfmemalloc and !pfmemalloc
requests.  A later patch will still protect the system, as processes
will get throttled if the pfmemalloc reserves get depleted, but
performance will not degrade as smoothly.

[[email protected]: Expanded changelog]
Signed-off-by: Christoph Lameter <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Cc: David Miller <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 072bb0a commit 5091b74

File tree

1 file changed: +3 −4 lines changed


mm/slub.c

Lines changed: 3 additions & 4 deletions
@@ -2281,11 +2281,11 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	}
 
 	page = c->page;
-	if (likely(!kmem_cache_debug(s)))
+	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
 		goto load_freelist;
 
 	/* Only entered in the debug case */
-	if (!alloc_debug_processing(s, page, freelist, addr))
+	if (kmem_cache_debug(s) && !alloc_debug_processing(s, page, freelist, addr))
 		goto new_slab;	/* Slab failed checks. Next slab needed */
 
 	deactivate_slab(s, page, get_freepointer(s, freelist));
@@ -2337,8 +2337,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !node_match(page, node) ||
-		     !pfmemalloc_match(page, gfpflags)))
+	if (unlikely(!object || !node_match(page, node)))
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 
 	else {

0 commit comments
