Commit 3c5c3cf

daxtens authored and torvalds committed
kasan: support backing vmalloc space with real shadow memory
Patch series "kasan: support backing vmalloc space with real shadow memory", v11.

Currently, vmalloc space is backed by the early shadow page. This means that kasan is incompatible with VMAP_STACK.

This series provides a mechanism to back vmalloc space with real, dynamically allocated memory. I have only wired up x86, because that's the only currently supported arch I can work with easily, but it's very easy to wire up other architectures, and it appears that there is some work-in-progress code to do this on arm64 and s390.

This has been discussed before in the context of VMAP_STACK:

 - https://bugzilla.kernel.org/show_bug.cgi?id=202009
 - https://lkml.org/lkml/2018/7/22/198
 - https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate a backing page when a mapping in vmalloc space uses a particular page of the shadow region. This page can be shared by other vmalloc mappings later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow memory.

Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

 - Turning on KASAN, inline instrumentation, without vmalloc, introduces a 4.1x-4.2x slowdown in vmalloc operations.
 - Turning this on introduces the following slowdowns over KASAN:
   * ~1.76x slower single-threaded (test_vmalloc.sh performance)
   * ~2.18x slower when both cpus are performing operations simultaneously (test_vmalloc.sh sequential_test_order=1)

This is unfortunate but, given that this is a debug feature only, not the end of the world. The benchmarks are also a stress-test for the vmalloc subsystem: they're not indicative of an overall 2x slowdown!
This patch (of 4):

Hook into vmalloc and vmap, and dynamically allocate real shadow memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate a backing page when a mapping in vmalloc space uses a particular page of the shadow region. This page can be shared by other vmalloc mappings later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow memory.

To avoid the difficulties around swapping mappings around, this code expects that the part of the shadow region that covers the vmalloc space will not be covered by the early shadow page, but will be left unmapped. This will require changes in arch-specific code.

This allows KASAN with VMAP_STACK, and may be helpful for architectures that do not have a separate module space (e.g. powerpc64, which I am currently working on). It also allows relaxing the module alignment back to PAGE_SIZE.

Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

 - Turning on KASAN, inline instrumentation, without vmalloc, introduces a 4.1x-4.2x slowdown in vmalloc operations.
 - Turning this on introduces the following slowdowns over KASAN:
   * ~1.76x slower single-threaded (test_vmalloc.sh performance)
   * ~2.18x slower when both cpus are performing operations simultaneously (test_vmalloc.sh sequential_test_order=1)

This is unfortunate but, given that this is a debug feature only, not the end of the world.
The full benchmark results are:

Performance

                            No KASAN      KASAN original  x baseline  KASAN vmalloc  x baseline  x KASAN
fix_size_alloc_test         662004        11404956        17.23       19144610       28.92       1.68
full_fit_alloc_test         710950        12029752        16.92       13184651       18.55       1.10
long_busy_list_alloc_test   9431875       43990172        4.66        82970178       8.80        1.89
random_size_alloc_test      5033626       23061762        4.58        47158834       9.37        2.04
fix_align_alloc_test        1252514       15276910        12.20       31266116       24.96       2.05
random_size_align_alloc_te  1648501       14578321        8.84        25560052       15.51       1.75
align_shift_alloc_test      147           830             5.65        5692           38.72       6.86
pcpu_alloc_test             80732         125520          1.55        140864         1.74        1.12
Total Cycles                119240774314  763211341128    6.40        1390338696894  11.66       1.82

Sequential, 2 cpus

                            No KASAN      KASAN original  x baseline  KASAN vmalloc  x baseline  x KASAN
fix_size_alloc_test         1423150       14276550        10.03       27733022       19.49       1.94
full_fit_alloc_test         1754219       14722640        8.39        15030786       8.57        1.02
long_busy_list_alloc_test   11451858      52154973        4.55        107016027      9.34        2.05
random_size_alloc_test      5989020       26735276        4.46        68885923       11.50       2.58
fix_align_alloc_test        2050976       20166900        9.83        50491675       24.62       2.50
random_size_align_alloc_te  2858229       17971700        6.29        38730225       13.55       2.16
align_shift_alloc_test      405           6428            15.87       26253          64.82       4.08
pcpu_alloc_test             127183        151464          1.19        216263         1.70        1.43
Total Cycles                54181269392   308723699764    5.70        650772566394   12.01       2.11

fix_size_alloc_test         1420404       14289308        10.06       27790035       19.56       1.94
full_fit_alloc_test         1736145       14806234        8.53        15274301       8.80        1.03
long_busy_list_alloc_test   11404638      52270785        4.58        107550254      9.43        2.06
random_size_alloc_test      6017006       26650625        4.43        68696127       11.42       2.58
fix_align_alloc_test        2045504       20280985        9.91        50414862       24.65       2.49
random_size_align_alloc_te  2845338       17931018        6.30        38510276       13.53       2.15
align_shift_alloc_test      472           3760            7.97        9656           20.46       2.57
pcpu_alloc_test             118643        132732          1.12        146504         1.23        1.10
Total Cycles                54040011688   309102805492    5.72        651325675652   12.05       2.11

[[email protected]: fixups]
Link: http://lkml.kernel.org/r/[email protected]
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mark Rutland <[email protected]> [shadow rework]
Signed-off-by: Daniel Axtens <[email protected]>
Co-developed-by: Mark Rutland <[email protected]>
Acked-by: Vasily Gorbik <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Qian Cai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent e36176b commit 3c5c3cf

File tree

9 files changed: +408 −9 lines changed


Documentation/dev-tools/kasan.rst

Lines changed: 63 additions & 0 deletions
@@ -218,3 +218,66 @@ brk handler is used to print bug reports.
 A potential expansion of this mode is a hardware tag-based mode, which would
 use hardware memory tagging support instead of compiler instrumentation and
 manual shadow memory manipulation.
+
+What memory accesses are sanitised by KASAN?
+--------------------------------------------
+
+The kernel maps memory in a number of different parts of the address
+space. This poses something of a problem for KASAN, which requires
+that all addresses accessed by instrumented code have a valid shadow
+region.
+
+The range of kernel virtual addresses is large: there is not enough
+real memory to support a real shadow region for every address that
+could be accessed by the kernel.
+
+By default
+~~~~~~~~~~
+
+By default, architectures only map real memory over the shadow region
+for the linear mapping (and potentially other small areas). For all
+other areas - such as vmalloc and vmemmap space - a single read-only
+page is mapped over the shadow area. This read-only shadow page
+declares all memory accesses as permitted.
+
+This presents a problem for modules: they do not live in the linear
+mapping, but in a dedicated module space. By hooking in to the module
+allocator, KASAN can temporarily map real shadow memory to cover
+them. This allows detection of invalid accesses to module globals, for
+example.
+
+This also creates an incompatibility with ``VMAP_STACK``: if the stack
+lives in vmalloc space, it will be shadowed by the read-only page, and
+the kernel will fault when trying to set up the shadow data for stack
+variables.
+
+CONFIG_KASAN_VMALLOC
+~~~~~~~~~~~~~~~~~~~~
+
+With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
+cost of greater memory usage. Currently this is only supported on x86.
+
+This works by hooking into vmalloc and vmap, and dynamically
+allocating real shadow memory to back the mappings.
+
+Most mappings in vmalloc space are small, requiring less than a full
+page of shadow space. Allocating a full shadow page per mapping would
+therefore be wasteful. Furthermore, to ensure that different mappings
+use different shadow pages, mappings would have to be aligned to
+``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
+
+Instead, we share backing space across multiple mappings. We allocate
+a backing page when a mapping in vmalloc space uses a particular page
+of the shadow region. This page can be shared by other vmalloc
+mappings later on.
+
+We hook in to the vmap infrastructure to lazily clean up unused shadow
+memory.
+
+To avoid the difficulties around swapping mappings around, we expect
+that the part of the shadow region that covers the vmalloc space will
+not be covered by the early shadow page, but will be left
+unmapped. This will require changes in arch-specific code.
+
+This allows ``VMAP_STACK`` support on x86, and can simplify support of
+architectures that do not have a fixed module region.

include/linux/kasan.h

Lines changed: 31 additions & 0 deletions
@@ -70,8 +70,18 @@ struct kasan_cache {
 	int free_meta_offset;
 };
 
+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
 int kasan_module_alloc(void *addr, size_t size);
 void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif
 
 int kasan_add_zero_shadow(void *start, unsigned long size);
 void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,25 @@ static inline void *kasan_reset_tag(const void *addr)
 
 #endif /* CONFIG_KASAN_SW_TAGS */
 
+#ifdef CONFIG_KASAN_VMALLOC
+int kasan_populate_vmalloc(unsigned long requested_size,
+			   struct vm_struct *area);
+void kasan_poison_vmalloc(void *start, unsigned long size);
+void kasan_release_vmalloc(unsigned long start, unsigned long end,
+			   unsigned long free_region_start,
+			   unsigned long free_region_end);
+#else
+static inline int kasan_populate_vmalloc(unsigned long requested_size,
+					 struct vm_struct *area)
+{
+	return 0;
+}
+
+static inline void kasan_poison_vmalloc(void *start, unsigned long size) {}
+static inline void kasan_release_vmalloc(unsigned long start,
+					 unsigned long end,
+					 unsigned long free_region_start,
+					 unsigned long free_region_end) {}
+#endif
+
 #endif /* LINUX_KASAN_H */

include/linux/moduleloader.h

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod);
 /* Any cleanup before freeing mod->module_init */
 void module_arch_freeing_init(struct module *mod);
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC)
 #include <linux/kasan.h>
 #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
 #else

include/linux/vmalloc.h

Lines changed: 12 additions & 0 deletions
@@ -22,6 +22,18 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+
+/*
+ * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
+ *
+ * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
+ * shadow memory has been mapped. It's used to handle allocation errors so that
+ * we don't try to poison shadow on free if it was never allocated.
+ *
+ * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
+ * determine which allocations need the module shadow freed.
+ */
+
 /*
  * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
  * vfree_atomic().

lib/Kconfig.kasan

Lines changed: 16 additions & 0 deletions
@@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN
 config HAVE_ARCH_KASAN_SW_TAGS
 	bool
 
+config HAVE_ARCH_KASAN_VMALLOC
+	bool
+
 config CC_HAS_KASAN_GENERIC
 	def_bool $(cc-option, -fsanitize=kernel-address)
 
@@ -142,6 +145,19 @@ config KASAN_SW_TAGS_IDENTIFY
 	  (use-after-free or out-of-bounds) at the cost of increased
 	  memory consumption.
 
+config KASAN_VMALLOC
+	bool "Back mappings in vmalloc space with real shadow memory"
+	depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+	help
+	  By default, the shadow region for vmalloc space is the read-only
+	  zero page. This means that KASAN cannot detect errors involving
+	  vmalloc space.
+
+	  Enabling this option will hook in to vmap/vmalloc and back those
+	  mappings with real shadow memory allocated on demand. This allows
+	  for KASAN to detect more sorts of errors (and to support vmapped
+	  stacks), but at the cost of higher memory usage.
+
 config TEST_KASAN
 	tristate "Module for testing KASAN for bug detection"
 	depends on m && KASAN
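On an architecture that selects HAVE_ARCH_KASAN_VMALLOC (x86 in this series), the new option combines with vmapped stacks in a .config fragment along these lines (a sketch; KASAN_GENERIC is the pre-existing generic-mode option, not part of this patch):

```
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
CONFIG_KASAN_VMALLOC=y
CONFIG_VMAP_STACK=y
```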
