
Commit 6401a2e
Authored by davidhildenbrand; committed by akpm00
fs/proc/task_mmu: convert smaps_hugetlb_range() to work on folios
Let's get rid of another page_mapcount() check and simply use
folio_likely_mapped_shared(), which is precise for hugetlb folios.

While at it, use huge_ptep_get() + pte_page() instead of ptep_get() +
vm_normal_page(), just like we do in pagemap_hugetlb_range().

No functional change intended.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Parent: 88e4e47

1 file changed: 7 additions, 6 deletions

fs/proc/task_mmu.c

@@ -730,19 +730,20 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 {
 	struct mem_size_stats *mss = walk->private;
 	struct vm_area_struct *vma = walk->vma;
-	struct page *page = NULL;
-	pte_t ptent = ptep_get(pte);
+	pte_t ptent = huge_ptep_get(pte);
+	struct folio *folio = NULL;
 
 	if (pte_present(ptent)) {
-		page = vm_normal_page(vma, addr, ptent);
+		folio = page_folio(pte_page(ptent));
 	} else if (is_swap_pte(ptent)) {
 		swp_entry_t swpent = pte_to_swp_entry(ptent);
 
 		if (is_pfn_swap_entry(swpent))
-			page = pfn_swap_entry_to_page(swpent);
+			folio = pfn_swap_entry_folio(swpent);
 	}
-	if (page) {
-		if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte))
+	if (folio) {
+		if (folio_likely_mapped_shared(folio) ||
+		    hugetlb_pmd_shared(pte))
 			mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
 		else
 			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
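
For readability, here is a sketch of how the hunk reads after the patch is applied. It is assembled purely from the diff above; the remainder of the parameter list and the function tail are not shown in this hunk, so those parts are assumptions filled in from the usual pagewalk callback shape, not taken from this commit:

	/* Sketch of smaps_hugetlb_range() after this commit; only the
	 * lines visible in the hunk above are authoritative. */
	static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
				       unsigned long addr, unsigned long end,
				       struct mm_walk *walk)
	{
		struct mem_size_stats *mss = walk->private;
		struct vm_area_struct *vma = walk->vma;
		pte_t ptent = huge_ptep_get(pte);	/* was: ptep_get(pte) */
		struct folio *folio = NULL;		/* was: struct page *page */

		if (pte_present(ptent)) {
			folio = page_folio(pte_page(ptent));
		} else if (is_swap_pte(ptent)) {
			swp_entry_t swpent = pte_to_swp_entry(ptent);

			if (is_pfn_swap_entry(swpent))
				folio = pfn_swap_entry_folio(swpent);
		}
		if (folio) {
			/* folio_likely_mapped_shared() is precise for hugetlb
			 * folios, replacing page_mapcount(page) >= 2. */
			if (folio_likely_mapped_shared(folio) ||
			    hugetlb_pmd_shared(pte))
				mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
			else
				mss->private_hugetlb += huge_page_size(hstate_vma(vma));
		}
		/* function tail not shown in the hunk */
	}

Note the design point stated in the commit message: because a hugetlb folio maps the whole hugetlb page, folio_likely_mapped_shared() is exact here, so the old heuristic page_mapcount(page) >= 2 check can be dropped without changing behavior.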
