Commit 1dddd25

x86/kaslr: Fix the vaddr_end mess
vaddr_end for KASLR is only documented in the KASLR code itself and is adjusted depending on config options. So it's not surprising that a change of the memory layout causes KASLR to have the wrong vaddr_end. This can map arbitrary stuff into other areas, causing hard-to-understand problems.

Remove the whole ifdef magic and define the start of the cpu_entry_area to be the end of the KASLR vaddr range.

Add documentation to that effect.

Fixes: 92a0f81 ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: Benjamin Gilbert <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Benjamin Gilbert <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: stable <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Garnier <[email protected]>
Cc: Alexander Kuleshov <[email protected]>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
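For illustration, a minimal sketch of the overlap the old bounds allowed. The addresses are taken from the 4-level mm.txt layout in the diff below; the _Static_assert framing is editorial, not part of the patch:

/* Illustration only, not kernel code. Addresses from the 4-level
 * layout in Documentation/x86/x86_64/mm.txt. */
#define CPU_ENTRY_AREA_BASE     0xfffffe0000000000UL  /* cpu_entry_area mapping */
#define ESPFIX_BASE_ADDR        0xffffff0000000000UL  /* %esp fixup stacks */

/* With CONFIG_X86_ESPFIX64 the old code used vaddr_end = ESPFIX_BASE_ADDR,
 * which lies above the cpu_entry_area, so randomized regions could be
 * placed on top of it. */
_Static_assert(ESPFIX_BASE_ADDR > CPU_ENTRY_AREA_BASE,
               "old vaddr_end reached past the cpu_entry_area");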

3 files changed, 22 insertions(+), 24 deletions(-)

Documentation/x86/x86_64/mm.txt

Lines changed: 6 additions & 0 deletions
@@ -12,6 +12,7 @@ ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
 ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
 ... unused hole ...
+				vaddr_end for KASLR
 fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
 fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
@@ -37,6 +38,7 @@ ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
 ... unused hole ...
 ffdf000000000000 - fffffc0000000000 (=53 bits) kasan shadow memory (8PB)
 ... unused hole ...
+				vaddr_end for KASLR
 fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
 ... unused hole ...
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
@@ -71,3 +73,7 @@ during EFI runtime calls.
 Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
 physical memory, vmalloc/ioremap space and virtual memory map are randomized.
 Their order is preserved but their base will be offset early at boot time.
+
+Be very careful vs. KASLR when changing anything here. The KASLR address
+range must not overlap with anything except the KASAN shadow area, which is
+correct as KASAN disables KASLR.

arch/x86/include/asm/pgtable_64_types.h

Lines changed: 7 additions & 1 deletion
@@ -75,7 +75,13 @@ typedef struct { pteval_t pte; } pte_t;
 #define PGDIR_SIZE	(_AC(1, UL) << PGDIR_SHIFT)
 #define PGDIR_MASK	(~(PGDIR_SIZE - 1))
 
-/* See Documentation/x86/x86_64/mm.txt for a description of the memory map. */
+/*
+ * See Documentation/x86/x86_64/mm.txt for a description of the memory map.
+ *
+ * Be very careful vs. KASLR when changing anything here. The KASLR address
+ * range must not overlap with anything except the KASAN shadow area, which
+ * is correct as KASAN disables KASLR.
+ */
 #define MAXMEM		_AC(__AC(1, UL) << MAX_PHYSMEM_BITS, UL)
 
 #ifdef CONFIG_X86_5LEVEL

arch/x86/mm/kaslr.c

Lines changed: 9 additions & 23 deletions
@@ -34,25 +34,14 @@
 #define TB_SHIFT 40
 
 /*
- * Virtual address start and end range for randomization. The end changes base
- * on configuration to have the highest amount of space for randomization.
- * It increases the possible random position for each randomized region.
+ * Virtual address start and end range for randomization.
  *
- * You need to add an if/def entry if you introduce a new memory region
- * compatible with KASLR. Your entry must be in logical order with memory
- * layout. For example, ESPFIX is before EFI because its virtual address is
- * before. You also need to add a BUILD_BUG_ON() in kernel_randomize_memory() to
- * ensure that this order is correct and won't be changed.
+ * The end address could depend on more configuration options to make the
+ * highest amount of space for randomization available, but that's too hard
+ * to keep straight and caused issues already.
  */
 static const unsigned long vaddr_start = __PAGE_OFFSET_BASE;
-
-#if defined(CONFIG_X86_ESPFIX64)
-static const unsigned long vaddr_end = ESPFIX_BASE_ADDR;
-#elif defined(CONFIG_EFI)
-static const unsigned long vaddr_end = EFI_VA_END;
-#else
-static const unsigned long vaddr_end = __START_KERNEL_map;
-#endif
+static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;
 
 /* Default values */
 unsigned long page_offset_base = __PAGE_OFFSET_BASE;
@@ -101,15 +90,12 @@ void __init kernel_randomize_memory(void)
 	unsigned long remain_entropy;
 
 	/*
-	 * All these BUILD_BUG_ON checks ensures the memory layout is
-	 * consistent with the vaddr_start/vaddr_end variables.
+	 * These BUILD_BUG_ON checks ensure the memory layout is consistent
+	 * with the vaddr_start/vaddr_end variables. These checks are very
+	 * limited....
 	 */
 	BUILD_BUG_ON(vaddr_start >= vaddr_end);
-	BUILD_BUG_ON(IS_ENABLED(CONFIG_X86_ESPFIX64) &&
-		     vaddr_end >= EFI_VA_END);
-	BUILD_BUG_ON((IS_ENABLED(CONFIG_X86_ESPFIX64) ||
-		      IS_ENABLED(CONFIG_EFI)) &&
-		     vaddr_end >= __START_KERNEL_map);
+	BUILD_BUG_ON(vaddr_end != CPU_ENTRY_AREA_BASE);
 	BUILD_BUG_ON(vaddr_end > __START_KERNEL_map);
 
 	if (!kaslr_memory_enabled())
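
For context, a condensed sketch of how the [vaddr_start, vaddr_end) window is consumed, loosely following what kernel_randomize_memory() does in this file. This is simplified, not the kernel source: get_random_offset() is a hypothetical stand-in for the kernel's seeded PRNG, and the PUD alignment and rounding details are omitted:

/* Condensed sketch, not kernel source: distribute the slack in
 * [vaddr_start, vaddr_end) among the randomized regions. */
struct kaslr_region {
	unsigned long *base;	/* randomized base, e.g. page_offset_base */
	unsigned long size;	/* space the region itself needs */
};

/* Hypothetical helper returning a value in [0, max]; the kernel
 * uses a seeded PRNG state here. */
extern unsigned long get_random_offset(unsigned long max);

static void distribute_entropy(struct kaslr_region *regions, int nr,
			       unsigned long vaddr_start,
			       unsigned long vaddr_end)
{
	unsigned long vaddr = vaddr_start;
	unsigned long remain = vaddr_end - vaddr_start;
	int i;

	/* The slack is the window minus the space the regions occupy. */
	for (i = 0; i < nr; i++)
		remain -= regions[i].size;

	for (i = 0; i < nr; i++) {
		/* Grant each region an equal share of the remaining slack. */
		unsigned long entropy = get_random_offset(remain / (nr - i));

		vaddr += entropy;
		*regions[i].base = vaddr;
		vaddr += regions[i].size;
		remain -= entropy;
	}
	/* Every base stays below vaddr_end; with the fix that means below
	 * CPU_ENTRY_AREA_BASE, so nothing lands on the cpu_entry_area. */
}

The fix works because every randomized base is bounded by vaddr_end, so pinning vaddr_end to CPU_ENTRY_AREA_BASE makes the no-overlap property hold by construction rather than by config-dependent ifdefs.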
