
Commit 5227cfa

mrutland-arm authored and ctmarinas committed
arm64: mm: place empty_zero_page in bss
Currently the zero page is set up in paging_init, and thus we cannot use the zero page earlier. We use the zero page as a reserved TTBR value from which no TLB entries may be allocated (e.g. when uninstalling the idmap). To enable such usage earlier (as may be required for invasive changes to the kernel page tables), and to minimise the time that the idmap is active, we need to be able to use the zero page before paging_init.

This patch follows the example set by x86, by allocating the zero page at compile time, in .bss. This means that the zero page itself is available immediately upon entry to start_kernel (as we zero .bss before this), and also means that the zero page takes up no space in the raw Image binary. The associated struct page is allocated in bootmem_init, and remains unavailable until this time.

Outside of arch code, the only users of empty_zero_page assume that the empty_zero_page symbol refers to the zeroed memory itself, and that ZERO_PAGE(x) must be used to acquire the associated struct page, following the example of x86. This patch also brings arm64 in line with these assumptions.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Tested-by: Jeremy Linton <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
1 parent 21ab99c commit 5227cfa

File tree

4 files changed: +5 −11 lines


arch/arm64/include/asm/mmu_context.h

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = page_to_phys(empty_zero_page);
+	unsigned long ttbr = virt_to_phys(empty_zero_page);
 
 	asm(
 	"	msr	ttbr0_el1, %0		// set TTBR0\n"

arch/arm64/include/asm/pgtable.h

Lines changed: 2 additions & 2 deletions

@@ -121,8 +121,8 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
  * ZERO_PAGE is a global shared page that is always zero: used
  * for zero-mapped memory areas etc..
  */
-extern struct page *empty_zero_page;
-#define ZERO_PAGE(vaddr)	(empty_zero_page)
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)
 
 #define pte_ERROR(pte)	__pte_error(__FILE__, __LINE__, pte_val(pte))

arch/arm64/kernel/head.S

Lines changed: 1 addition & 0 deletions

@@ -421,6 +421,7 @@ __mmap_switched:
 	adr_l	x2, __bss_stop
 	sub	x2, x2, x0
 	bl	__pi_memset
+	dsb	ishst				// Make zero page visible to PTW
 
 	adr_l	sp, initial_sp, x4
 	mov	x4, sp

arch/arm64/mm/mmu.c

Lines changed: 1 addition & 8 deletions

@@ -49,7 +49,7 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
  * Empty_zero_page is a special page that is used for zero-initialized data
  * and COW.
  */
-struct page *empty_zero_page;
+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
 
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
@@ -459,18 +459,11 @@ void fixup_init(void)
  */
 void __init paging_init(void)
 {
-	void *zero_page;
-
 	map_mem();
 	fixup_executable();
 
-	/* allocate the zero page. */
-	zero_page = early_pgtable_alloc();
-
 	bootmem_init();
 
-	empty_zero_page = virt_to_page(zero_page);
-
 	/*
 	 * TTBR0 is only used for the identity mapping at this stage. Make it
 	 * point to zero page to avoid speculatively fetching new entries.
