Commit 6bc9b56

Naoya Horiguchi authored and torvalds committed
mm: fix race on soft-offlining free huge pages
Patch series "mm: soft-offline: fix race against page allocation".

Xishi recently reported a race on reusing the target pages of soft offlining. Discussion and analysis showed that we need to make sure that setting PG_hwpoison is done in the right place under zone->lock for soft offline. Patch 1/2 handles the free hugepage case, and 2/2 handles the free buddy page case.

This patch (of 2):

There's a race condition between soft offline and hugetlb_fault which causes unexpected process killing and/or hugetlb allocation failure.

The process killing is caused by the following flow:

  CPU 0                  CPU 1                   CPU 2

  soft offline
    get_any_page
    // find the hugetlb is free
                         mmap a hugetlb file
                         page fault
                           ...
                             hugetlb_fault
                               hugetlb_no_page
                                 alloc_huge_page
                                 // succeed
      soft_offline_free_page
      // set hwpoison flag
                                                 mmap the hugetlb file
                                                 page fault
                                                   ...
                                                     hugetlb_fault
                                                       hugetlb_no_page
                                                         find_lock_page
                                                         return VM_FAULT_HWPOISON
                                                   mm_fault_error
                                                     do_sigbus
                                                     // kill the process

The hugetlb allocation failure comes from the following flow:

  CPU 0                                CPU 1

                                       mmap a hugetlb file
                                       // reserve all free pages but don't fault in
  soft offline
    get_any_page
    // find the hugetlb is free
      soft_offline_free_page
      // set hwpoison flag
        dissolve_free_huge_page
        // fail because all free hugepages are reserved
                                       page fault
                                         ...
                                           hugetlb_fault
                                             hugetlb_no_page
                                               alloc_huge_page
                                                 ...
                                                   dequeue_huge_page_node_exact
                                                   // ignore hwpoisoned hugepage
                                                   // and finally fail due to no-mem

The root cause is that the current soft-offline code is written on the assumption that the PageHWPoison flag should be set first to avoid accessing the corrupted data. This makes sense for memory_failure() or hard offline, but not for soft offline, because soft offline is about corrected (not uncorrected) errors and is safe from data loss.

This patch changes the soft offline semantics so that it sets the PageHWPoison flag only after containment of the error page completes successfully.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Naoya Horiguchi <[email protected]>
Reported-by: Xishi Qiu <[email protected]>
Suggested-by: Xishi Qiu <[email protected]>
Tested-by: Mike Kravetz <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 30aba66 commit 6bc9b56

File tree

3 files changed: +21 −14 lines changed

mm/hugetlb.c

Lines changed: 5 additions & 6 deletions
@@ -1479,22 +1479,20 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 /*
  * Dissolve a given free hugepage into free buddy pages. This function does
  * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
- * number of free hugepages would be reduced below the number of reserved
- * hugepages.
+ * dissolution fails because a give page is not a free hugepage, or because
+ * free hugepages are fully reserved.
  */
 int dissolve_free_huge_page(struct page *page)
 {
-	int rc = 0;
+	int rc = -EBUSY;
 
 	spin_lock(&hugetlb_lock);
 	if (PageHuge(page) && !page_count(page)) {
 		struct page *head = compound_head(page);
 		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
-		if (h->free_huge_pages - h->resv_huge_pages == 0) {
-			rc = -EBUSY;
+		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		}
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
@@ -1508,6 +1506,7 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);

mm/memory-failure.c

Lines changed: 16 additions & 6 deletions
@@ -1598,8 +1598,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		if (ret > 0)
 			ret = -EIO;
 	} else {
-		if (PageHuge(page))
-			dissolve_free_huge_page(page);
+		/*
+		 * We set PG_hwpoison only when the migration source hugepage
+		 * was successfully dissolved, because otherwise hwpoisoned
+		 * hugepage remains on free hugepage list, then userspace will
+		 * find it as SIGBUS by allocation failure. That's not expected
+		 * in soft-offlining.
+		 */
+		ret = dissolve_free_huge_page(page);
+		if (!ret) {
+			if (set_hwpoison_free_buddy_page(page))
+				num_poisoned_pages_inc();
+		}
 	}
 	return ret;
 }
@@ -1715,13 +1725,13 @@ static int soft_offline_in_use_page(struct page *page, int flags)
 
 static void soft_offline_free_page(struct page *page)
 {
+	int rc = 0;
 	struct page *head = compound_head(page);
 
-	if (!TestSetPageHWPoison(head)) {
+	if (PageHuge(head))
+		rc = dissolve_free_huge_page(page);
+	if (!rc && !TestSetPageHWPoison(page))
 		num_poisoned_pages_inc();
-		if (PageHuge(head))
-			dissolve_free_huge_page(page);
-	}
 }
 
 /**

mm/migrate.c

Lines changed: 0 additions & 2 deletions
@@ -1331,8 +1331,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 out:
 	if (rc != -EAGAIN)
 		putback_active_hugepage(hpage);
-	if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
-		num_poisoned_pages_inc();
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use
