
Commit 8037195

kiryl authored and torvalds committed
thp: setup huge zero page on non-write page fault
All code paths seem covered. Now we can map the huge zero page on a read page fault. We set it up in do_huge_pmd_anonymous_page() if the area around the fault address is suitable for THP and we've got a read page fault. If we fail to set up the huge zero page (ENOMEM), we fall back to handle_pte_fault() as we normally do in THP.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent c5a647d commit 8037195

File tree

1 file changed: +10 −0 lines


mm/huge_memory.c

Lines changed: 10 additions & 0 deletions
@@ -733,6 +733,16 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
 	if (unlikely(khugepaged_enter(vma)))
 		return VM_FAULT_OOM;
+	if (!(flags & FAULT_FLAG_WRITE)) {
+		pgtable_t pgtable;
+		pgtable = pte_alloc_one(mm, haddr);
+		if (unlikely(!pgtable))
+			return VM_FAULT_OOM;
+		spin_lock(&mm->page_table_lock);
+		set_huge_zero_page(pgtable, mm, vma, haddr, pmd);
+		spin_unlock(&mm->page_table_lock);
+		return 0;
+	}
 	page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
 			vma, haddr, numa_node_id(), 0);
 	if (unlikely(!page)) {
