Commit f19f5c4

Sean Christopherson authored and Linus Torvalds committed
x86/speculation/l1tf: Exempt zeroed PTEs from inversion
It turns out that we should *not* invert all not-present mappings,
because the all zeroes case is obviously special.

clear_page() does not undergo the XOR logic to invert the address bits,
i.e. PTE, PMD and PUD entries that have not been individually written
will have val=0 and so will trigger __pte_needs_invert(). As a result,
{pte,pmd,pud}_pfn() will return the wrong PFN value, i.e. all ones
(adjusted by the max PFN mask) instead of zero. A zeroed entry is ok
because the page at physical address 0 is reserved early in boot
specifically to mitigate L1TF, so explicitly exempt them from the
inversion when reading the PFN.

Manifested as an unexpected mprotect(..., PROT_NONE) failure when
called on a VMA that has VM_PFNMAP and was mmap'd to as something other
than PROT_NONE but never used. mprotect() sends the PROT_NONE request
down prot_none_walk(), which walks the PTEs to check the PFNs.
prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
-EACCES because it thinks mprotect() is trying to adjust a high MMIO
address.

[ This is a very modified version of Sean's original patch, but all
  credit goes to Sean for doing this and also pointing out that
  sometimes the __pte_needs_invert() function only gets the protection
  bits, not the full eventual pte. But zero remains special even in
  just protection bits, so that's ok. - Linus ]

Fixes: f22cc87 ("x86/speculation/l1tf: Invert all not present mappings")
Signed-off-by: Sean Christopherson <[email protected]>
Acked-by: Andi Kleen <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent b0e5c29 commit f19f5c4

File tree

1 file changed: +10 -1 lines changed


arch/x86/include/asm/pgtable-invert.h

Lines changed: 10 additions & 1 deletion
@@ -4,9 +4,18 @@

 #ifndef __ASSEMBLY__

+/*
+ * A clear pte value is special, and doesn't get inverted.
+ *
+ * Note that even users that only pass a pgprot_t (rather
+ * than a full pte) won't trigger the special zero case,
+ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
+ * set. So the all zero case really is limited to just the
+ * cleared page table entry case.
+ */
 static inline bool __pte_needs_invert(u64 val)
 {
-	return !(val & _PAGE_PRESENT);
+	return val && !(val & _PAGE_PRESENT);
 }

 /* Get a mask to xor with the page table entry to get the correct pfn. */
