
Commit 98ea025

zhenwei pi authored and Matthew Wilcox (Oracle) committed
mm/rmap: Fix handling of hugetlbfs pages in page_vma_mapped_walk
page_mapped_in_vma() sets nr_pages to 1, which is usually correct as we only want to know about the precise page and not about other pages in the folio. However, hugetlbfs does want to know about the entire hpage, and using nr_pages to get the size of the hpage is wrong. We could change page_mapped_in_vma() to special-case hugetlbfs pages, but it's better to ignore nr_pages in page_vma_mapped_walk() and get the size from the VMA instead.

Fixes: 2aff7a4 ("mm: Convert page_vma_mapped_walk to work on PFNs")
Signed-off-by: zhenwei pi <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
[edit commit message, use hstate directly]
1 parent ec4858e commit 98ea025

File tree

1 file changed: +3 −3 lines changed

mm/page_vma_mapped.c

Lines changed: 3 additions & 3 deletions
```diff
@@ -163,7 +163,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			return not_found(pvmw);
 
 	if (unlikely(is_vm_hugetlb_page(vma))) {
-		unsigned long size = pvmw->nr_pages * PAGE_SIZE;
+		struct hstate *hstate = hstate_vma(vma);
+		unsigned long size = huge_page_size(hstate);
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
@@ -173,8 +174,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (!pvmw->pte)
 			return false;
 
-		pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm,
-					     pvmw->pte);
+		pvmw->ptl = huge_pte_lockptr(hstate, mm, pvmw->pte);
 		spin_lock(pvmw->ptl);
 		if (!check_pte(pvmw))
 			return not_found(pvmw);
```
