
Commit 71725ed

Hugh Dickins authored and torvalds committed
mm: huge tmpfs: try to split_huge_page() when punching hole
Yang Shi writes:

Currently, when truncating a shmem file, if the range is partly in a THP (start or end is in the middle of a THP), the pages will just get cleared rather than freed, unless the range covers the whole THP. Even when all the subpages are truncated (randomly or sequentially), the THP may still be kept in the page cache.

This might be fine for some usecases which prefer preserving THPs, but balloon inflation is handled at base page size, so when using shmem THP as the memory backend, QEMU inflation does not work as expected: it does not free memory. The inflation usecase really needs the memory freed. (An anonymous THP will also not be freed right away, but it will be freed eventually when all of its subpages are unmapped; a shmem THP instead stays in the page cache.)

Split the THP right away when doing a partial hole punch, and if the split fails just clear the page, so that a read of the punched area returns zeroes.

Hugh Dickins adds:

Our earlier "team of pages" huge tmpfs implementation worked in the way that Yang Shi proposes, and we have been using this patch to continue to split the huge page when hole-punched or truncated, since converting over to the compound page implementation.

Although huge tmpfs gives out huge pages when available, if the user specifically asks to truncate or punch a hole (perhaps to free memory, perhaps to reduce the memcg charge), then the filesystem should do so as best it can, splitting the huge page. That is not always possible: any additional reference to the huge page prevents split_huge_page() from succeeding, so the result can be flaky; but in practice it works successfully enough that we have not seen any problem from it.

Add shmem_punch_compound() to encapsulate the decision of when a split is needed, and to do the split if so. Using this simplifies the flow in shmem_undo_range(); and the first (trylock) pass does not need to do any page clearing on failure, because the second pass will either succeed or do that clearing. Following the example of zero_user_segment() when clearing a partial page, add flush_dcache_page() and set_page_dirty() when clearing a hole - though I'm not certain that either is needed.

But: split_huge_page() would be sure to fail if shmem_undo_range()'s pagevec held further references to the huge page. The easiest way to fix that is for find_get_entries() to return early, as soon as it has put one compound head or tail into the pagevec. At first this felt like a hack; but on examination, this convention better suits all its callers - or will do, if the slight one-page-per-pagevec slowdown in shmem_unlock_mapping() and shmem_seek_hole_data() is transformed into a 512-page-per-pagevec speedup by checking for compound pages there.

Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: "Michael S. Tsirkin" <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 343c3d7 commit 71725ed

3 files changed, +60 -56 lines changed

mm/filemap.c

Lines changed: 13 additions & 1 deletion

@@ -1693,6 +1693,11 @@ EXPORT_SYMBOL(pagecache_get_page);
  * Any shadow entries of evicted pages, or swap entries from
  * shmem/tmpfs, are included in the returned array.
  *
+ * If it finds a Transparent Huge Page, head or tail, find_get_entries()
+ * stops at that page: the caller is likely to have a better way to handle
+ * the compound page as a whole, and then skip its extent, than repeatedly
+ * calling find_get_entries() to return all its tails.
+ *
  * Return: the number of pages and shadow entries which were found.
  */
 unsigned find_get_entries(struct address_space *mapping,
@@ -1724,8 +1729,15 @@ unsigned find_get_entries(struct address_space *mapping,
 		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
-		page = find_subpage(page, xas.xa_index);
 
+		/*
+		 * Terminate early on finding a THP, to allow the caller to
+		 * handle it all at once; but continue if this is hugetlbfs.
+		 */
+		if (PageTransHuge(page) && !PageHuge(page)) {
+			page = find_subpage(page, xas.xa_index);
+			nr_entries = ret + 1;
+		}
 export:
 		indices[ret] = xas.xa_index;
 		entries[ret] = page;

mm/shmem.c

Lines changed: 43 additions & 55 deletions

@@ -788,6 +788,32 @@ void shmem_unlock_mapping(struct address_space *mapping)
 	}
 }
 
+/*
+ * Check whether a hole-punch or truncation needs to split a huge page,
+ * returning true if no split was required, or the split has been successful.
+ *
+ * Eviction (or truncation to 0 size) should never need to split a huge page;
+ * but in rare cases might do so, if shmem_undo_range() failed to trylock on
+ * head, and then succeeded to trylock on tail.
+ *
+ * A split can only succeed when there are no additional references on the
+ * huge page: so the split below relies upon find_get_entries() having stopped
+ * when it found a subpage of the huge page, without getting further references.
+ */
+static bool shmem_punch_compound(struct page *page, pgoff_t start, pgoff_t end)
+{
+	if (!PageTransCompound(page))
+		return true;
+
+	/* Just proceed to delete a huge page wholly within the range punched */
+	if (PageHead(page) &&
+	    page->index >= start && page->index + HPAGE_PMD_NR <= end)
+		return true;
+
+	/* Try to split huge page, so we can truly punch the hole or truncate */
+	return split_huge_page(page) >= 0;
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -838,31 +864,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (!trylock_page(page))
 				continue;
 
-			if (PageTransTail(page)) {
-				/* Middle of THP: zero out the page */
-				clear_highpage(page);
-				unlock_page(page);
-				continue;
-			} else if (PageTransHuge(page)) {
-				if (index == round_down(end, HPAGE_PMD_NR)) {
-					/*
-					 * Range ends in the middle of THP:
-					 * zero out the page
-					 */
-					clear_highpage(page);
-					unlock_page(page);
-					continue;
-				}
-				index += HPAGE_PMD_NR - 1;
-				i += HPAGE_PMD_NR - 1;
-			}
-
-			if (!unfalloc || !PageUptodate(page)) {
-				VM_BUG_ON_PAGE(PageTail(page), page);
-				if (page_mapping(page) == mapping) {
-					VM_BUG_ON_PAGE(PageWriteback(page), page);
+			if ((!unfalloc || !PageUptodate(page)) &&
+			    page_mapping(page) == mapping) {
+				VM_BUG_ON_PAGE(PageWriteback(page), page);
+				if (shmem_punch_compound(page, start, end))
 					truncate_inode_page(mapping, page);
-				}
 			}
 			unlock_page(page);
 		}
@@ -936,43 +942,25 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 
 			lock_page(page);
 
-			if (PageTransTail(page)) {
-				/* Middle of THP: zero out the page */
-				clear_highpage(page);
-				unlock_page(page);
-				/*
-				 * Partial thp truncate due 'start' in middle
-				 * of THP: don't need to look on these pages
-				 * again on !pvec.nr restart.
-				 */
-				if (index != round_down(end, HPAGE_PMD_NR))
-					start++;
-				continue;
-			} else if (PageTransHuge(page)) {
-				if (index == round_down(end, HPAGE_PMD_NR)) {
-					/*
-					 * Range ends in the middle of THP:
-					 * zero out the page
-					 */
-					clear_highpage(page);
-					unlock_page(page);
-					continue;
-				}
-				index += HPAGE_PMD_NR - 1;
-				i += HPAGE_PMD_NR - 1;
-			}
-
 			if (!unfalloc || !PageUptodate(page)) {
-				VM_BUG_ON_PAGE(PageTail(page), page);
-				if (page_mapping(page) == mapping) {
-					VM_BUG_ON_PAGE(PageWriteback(page), page);
-					truncate_inode_page(mapping, page);
-				} else {
+				if (page_mapping(page) != mapping) {
 					/* Page was replaced by swap: retry */
 					unlock_page(page);
 					index--;
 					break;
 				}
+				VM_BUG_ON_PAGE(PageWriteback(page), page);
+				if (shmem_punch_compound(page, start, end))
+					truncate_inode_page(mapping, page);
+				else {
+					/* Wipe the page and don't get stuck */
+					clear_highpage(page);
+					flush_dcache_page(page);
+					set_page_dirty(page);
+					if (index <
+					    round_up(start, HPAGE_PMD_NR))
+						start = index + 1;
+				}
 			}
 			unlock_page(page);
 		}

mm/swap.c

Lines changed: 4 additions & 0 deletions

@@ -1004,6 +1004,10 @@ void __pagevec_lru_add(struct pagevec *pvec)
  * ascending indexes. There may be holes in the indices due to
  * not-present entries.
  *
+ * Only one subpage of a Transparent Huge Page is returned in one call:
+ * allowing truncate_inode_pages_range() to evict the whole THP without
+ * cycling through a pagevec of extra references.
+ *
  * pagevec_lookup_entries() returns the number of entries which were
  * found.
  */
