
Commit 88aaa2a

Naoya Horiguchi authored and torvalds committed
mm: mempolicy: add queue_pages_required()
Patch series "mm: page migration enhancement for thp", v9. Motivations: 1. THP migration becomes important in the upcoming heterogeneous memory systems. As David Nellans from NVIDIA pointed out from other threads (http://www.mail-archive.com/[email protected]/msg1349227.html), future GPUs or other accelerators will have their memory managed by operating systems. Moving data into and out of these memory nodes efficiently is critical to applications that use GPUs or other accelerators. Existing page migration only supports base pages, which has a very low memory bandwidth utilization. My experiments (see below) show THP migration can migrate pages more efficiently. 2. Base page migration vs THP migration throughput. Here are cross-socket page migration results from calling move_pages() syscall: In x86_64, a Intel two-socket E5-2640v3 box, - single 4KB base page migration takes 62.47 us, using 0.06 GB/s BW, - single 2MB THP migration takes 658.54 us, using 2.97 GB/s BW, - 512 4KB base page migration takes 1987.38 us, using 0.98 GB/s BW. In ppc64, a two-socket Power8 box, - single 64KB base page migration takes 49.3 us, using 1.24 GB/s BW, - single 16MB THP migration takes 2202.17 us, using 7.10 GB/s BW, - 256 64KB base page migration takes 2543.65 us, using 6.14 GB/s BW. THP migration can give us 3x and 1.15x throughput over base page migration in x86_64 and ppc64 respectivley. You can test it out by using the code here: https://github.com/x-y-z/thp-migration-bench 3. Existing page migration splits THP before migration and cannot guarantee the migrated pages are still contiguous. Contiguity is always what GPUs and accelerators look for. Without THP migration, khugepaged needs to do extra work to reassemble the migrated pages back to THPs. This patch (of 10): Introduce a separate check routine related to MPOL_MF_INVERT flag. This patch just does cleanup, no behavioral change. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Naoya Horiguchi <[email protected]> Signed-off-by: Zi Yan <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David Nellans <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
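The throughput numbers above come from timing cross-socket migrations
driven by move_pages(2); the full benchmark lives in the repository
linked in the message. Below is only a minimal sketch of that call
pattern, not the author's benchmark code: it moves one base page to
destination node 1, which is an assumption you would adjust for your
machine (build with gcc -o movepage movepage.c -lnuma).

	/* Minimal cross-node migration sketch using move_pages(2). */
	#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE; link with -lnuma */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page_size = sysconf(_SC_PAGESIZE);
		void *buf = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(buf, 1, page_size);	/* fault the page in first */

		void *pages[1] = { buf };
		int nodes[1] = { 1 };		/* destination NUMA node: assumed */
		int status[1];

		/* pid 0 means the calling process; MPOL_MF_MOVE moves unshared pages */
		if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0) {
			perror("move_pages");
			return 1;
		}
		printf("page now on node %d (negative means -errno)\n", status[0]);
		return 0;
	}

Timing a loop around this call, for one page versus a THP-backed
region, is the kind of comparison the numbers above describe.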
1 parent 015a9e6 commit 88aaa2a

1 file changed: +17 -5 lines

mm/mempolicy.c

Lines changed: 17 additions & 5 deletions
@@ -411,6 +411,21 @@ struct queue_pages {
 	struct vm_area_struct *prev;
 };
 
+/*
+ * Check if the page's nid is in qp->nmask.
+ *
+ * If MPOL_MF_INVERT is set in qp->flags, check if the nid is
+ * in the invert of qp->nmask.
+ */
+static inline bool queue_pages_required(struct page *page,
+					struct queue_pages *qp)
+{
+	int nid = page_to_nid(page);
+	unsigned long flags = qp->flags;
+
+	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
+}
+
 /*
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
@@ -464,8 +479,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (PageReserved(page))
 			continue;
-		nid = page_to_nid(page);
-		if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+		if (!queue_pages_required(page, qp))
 			continue;
 		if (PageTransCompound(page)) {
 			get_page(page);
@@ -497,7 +511,6 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int nid;
 	struct page *page;
 	spinlock_t *ptl;
 	pte_t entry;
@@ -507,8 +520,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	page = pte_page(entry);
-	nid = page_to_nid(page);
-	if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+	if (!queue_pages_required(page, qp))
 		goto unlock;
 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
