
Commit 9364f96

jchu314atgithub authored and vijay-suman committed
mm: make page_mapped_in_vma() hugetlb walk aware
When a process consumes a UE (uncorrectable error) in a page, the memory failure handler attempts to collect information for a potential SIGBUS. If the page is an anonymous page, page_mapped_in_vma(page, vma) is invoked in order to

  1. retrieve the vaddr from the process' address space,
  2. verify that the vaddr is indeed mapped to the poisoned page,

where 'page' is the precise small page with the UE.

It has been observed that injecting poison into a non-head subpage of an anonymous hugetlb page produces no SIGBUS, while injecting into the head page does. The cause is that although hugetlb_walk() returns a valid pmd entry (on x86), check_pte() detects a mismatch between the head page per the pmd and the input subpage. The vaddr is therefore considered not mapped to the subpage, and the process is not collected for SIGBUS purposes. This is the calling stack:

  collect_procs_anon
    page_mapped_in_vma
      page_vma_mapped_walk
        hugetlb_walk
          huge_pte_lock
            check_pte

The check_pte() header says that it "check[s] if [pvmw->pfn, @pvmw->pfn + @pvmw->nr_pages) is mapped at the @pvmw->pte", but in practice it only works if pvmw->pfn is the pfn of the head page at pvmw->pte. In hindsight, since some pvmw->pte entries can point to a hugepage of some sort, it makes sense to make check_pte() work for hugepages.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jane Chu <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: linmiaohe <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
(cherry picked from commit 442b1eca223b4860cc85ef970ae602d125aec5a4)

Conflicts:
	mm/page_vma_mapped.c

Conflicts are due to the lack of upstream commits
  9651eea ("mm: correct stale comment of function check_pte")
  2aff7a4 ("mm: Convert page_vma_mapped_walk to work on PFNs")
  8f0b747 ("mm/page_vma_mapped.c: use helper function huge_pte_lock")
These are not backported because #1 and #3 are trivial, and #2 involves more code than the issue this patch addresses. The change in this backport works in the same spirit with minimal impact.

Orabug: 37956589
Signed-off-by: Jane Chu <[email protected]>
Reviewed-by: William Roche <[email protected]>
Signed-off-by: Vijayendra Suman <[email protected]>
1 parent 37f7c26 commit 9364f96

File tree

1 file changed: 3 additions, 1 deletion

mm/page_vma_mapped.c

Lines changed: 3 additions & 1 deletion

```diff
@@ -116,7 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_is_match(pvmw->page, pfn);
+	if (unlikely(PageHuge(pvmw->page)))
+		return pfn_is_match(compound_head(pvmw->page), pfn);
+	return pfn_is_match(pvmw->page, pfn);
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
```
