Commit 9cf7a11

Abel Wu authored and Linus Torvalds committed
mm/slub: make add_full() condition more explicit
The commit below is incomplete, as it didn't handle the add_full() part:

  commit a4d3f89 ("slub: remove useless kmem_cache_debug() before remove_full()")

This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(), since that should be the only context in which we need the list_lock for add_full().

Signed-off-by: Abel Wu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Liu Xiang <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 9f986d9 commit 9cf7a11


mm/slub.c

Lines changed: 3 additions & 1 deletion
@@ -2245,7 +2245,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if ((s->flags & SLAB_STORE_USER) && !lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2254,6 +2255,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}
 
 	if (l != m) {
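
Why SLAB_STORE_USER in particular? add_full() only manipulates the node's full list when that flag is set, so it is the only debug flag for which deactivate_slab() actually needs n->list_lock in the M_FULL case. A rough, non-verbatim sketch of what add_full() looks like in mm/slub.c around this kernel version (for context only, not part of this commit's diff):

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		/* Full-list tracking is only done when SLAB_STORE_USER is set. */
		if (!(s->flags & SLAB_STORE_USER))
			return;

		/* Callers such as deactivate_slab() must already hold list_lock. */
		lockdep_assert_held(&n->list_lock);
		list_add(&page->slab_list, &n->full);
	}

With that in mind, the patch above takes the lock under exactly the condition add_full() cares about, rather than the broader kmem_cache_debug() check.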
