Commit ed18adc
mm: SLUB hardened usercopy support
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the SLUB allocator to catch any copies that may span objects. Includes a redzone handling fix discovered by Michael Ellerman. Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <[email protected]>
Tested-by: Michael Ellerman <[email protected]>
Reviewed-by: Laura Abbott <[email protected]>
1 parent 04385fc commit ed18adc
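
To make the target of the check concrete, here is a hypothetical driver-style snippet. The names sample_state, sample_read, the 64-byte payload, and the user-controlled len are invented for this sketch; only kmalloc(), kfree(), and copy_to_user() are real kernel APIs. With CONFIG_HARDENED_USERCOPY enabled, the usercopy checks added elsewhere in this series consult __check_heap_object() on the copy path, so a length larger than the slab object is rejected instead of leaking neighbouring objects:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

struct sample_state {		/* hypothetical driver state: 64 usable bytes */
	char payload[64];
};

/* Hypothetical read path: len is controlled by userspace. */
static long sample_read(void __user *ubuf, size_t len)
{
	struct sample_state *st = kmalloc(sizeof(*st), GFP_KERNEL);
	long ret = 0;

	if (!st)
		return -ENOMEM;

	/*
	 * If len > sizeof(st->payload), this copy would walk past the
	 * object toward its neighbours in the same slab.  With hardened
	 * usercopy, the copy path checks the slab object bounds and
	 * rejects the copy rather than exposing adjacent objects.
	 */
	if (copy_to_user(ubuf, st->payload, len))
		ret = -EFAULT;

	kfree(st);
	return ret;
}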

File tree

2 files changed: +41 -0 lines changed

2 files changed

+41
-0
lines changed

init/Kconfig

Lines changed: 1 addition & 0 deletions
@@ -1766,6 +1766,7 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).

mm/slub.c

Lines changed: 40 additions & 0 deletions
@@ -3614,6 +3614,46 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+	size_t object_size;
+
+	/* Find object and usable object size. */
+	s = page->slab_cache;
+	object_size = slab_ksize(s);
+
+	/* Reject impossible pointers. */
+	if (ptr < page_address(page))
+		return s->name;
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Adjust for redzone and reject if within the redzone. */
+	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+		if (offset < s->red_left_pad)
+			return s->name;
+		offset -= s->red_left_pad;
+	}
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= object_size && n <= object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
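
For intuition, the bounds test at the end of the new function can be modelled in ordinary userspace C. In the sketch below, copy_allowed(), the 96-byte stride, and the 64 usable bytes are invented for illustration (in the kernel these come from s->size and slab_ksize(s)), and the redzone adjustment is omitted:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Userspace model of the check: a slab page is carved into objects of
 * 'stride' bytes, of which 'object_size' bytes are usable.  A copy of
 * 'n' bytes starting at byte 'addr' within the page is allowed only if
 * it stays inside a single object's usable area.
 */
static bool copy_allowed(uintptr_t addr, size_t n,
			 size_t stride, size_t object_size)
{
	size_t offset = addr % stride;	/* offset within the object */

	return offset <= object_size && n <= object_size - offset;
}

int main(void)
{
	/* Illustrative geometry: 96-byte stride, 64 usable bytes each. */
	const size_t stride = 96, object_size = 64;

	printf("%d\n", copy_allowed(200, 32, stride, object_size)); /* offset 8, fits: 1 */
	printf("%d\n", copy_allowed(200, 64, stride, object_size)); /* overruns the 64 usable bytes: 0 */
	return 0;
}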
