
Commit eb83f65

soleen authored and akpm00 committed
mm: hugetlb_vmemmap: provide stronger vmemmap allocation guarantees
HugeTLB pages have a struct page optimization in which the struct pages for tail pages are freed. However, when a HugeTLB page is destroyed, the memory for its struct pages (vmemmap) needs to be allocated again.

Currently, the __GFP_NORETRY flag is used to allocate the memory for vmemmap, but given that this flag makes very little effort to actually reclaim memory, returning huge pages back to the system can be a problem. Use __GFP_RETRY_MAYFAIL instead. This flag also performs graceful reclaim without causing OOMs, but it may perform a few retries and fails only when there is genuinely little unused memory in the system.

Freeing a 1G page requires 16M of free memory (262144 base pages, each needing a 64-byte struct page). A machine might need to be reconfigured from one task to another and release a large number of 1G pages back to the system; if allocating 16M fails, the release won't work.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Pasha Tatashin <[email protected]>
Suggested-by: David Rientjes <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
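The 16M figure quoted in the message follows directly from the base-page count and the struct page size; a minimal userspace sketch of that arithmetic, assuming 4 KiB base pages and a 64-byte struct page (the common x86-64 configuration):

#include <stdio.h>

int main(void)
{
	/* Assumed sizes: 4 KiB base pages, 64-byte struct page. */
	unsigned long huge_page_size = 1UL << 30;   /* 1G HugeTLB page */
	unsigned long base_page_size = 1UL << 12;   /* 4K base page    */
	unsigned long struct_page_sz = 64;

	unsigned long nr_struct_pages = huge_page_size / base_page_size;
	unsigned long vmemmap_bytes   = nr_struct_pages * struct_page_sz;

	printf("struct pages per 1G page: %lu\n", nr_struct_pages);      /* 262144 */
	printf("vmemmap to reallocate:    %luM\n", vmemmap_bytes >> 20); /* 16M    */
	return 0;
}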
1 parent bb6e04a commit eb83f65

File tree: 1 file changed (+5, -6 lines)


mm/hugetlb_vmemmap.c

Lines changed: 5 additions & 6 deletions
@@ -384,8 +384,9 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 }
 
 static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
-				   gfp_t gfp_mask, struct list_head *list)
+				   struct list_head *list)
 {
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
 	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
 	int nid = page_to_nid((struct page *)start);
 	struct page *page, *next;
@@ -413,12 +414,11 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
  * @end: end address of the vmemmap virtual address range that we want to
  *	remap.
  * @reuse: reuse address.
- * @gfp_mask: GFP flag for allocating vmemmap pages.
  *
  * Return: %0 on success, negative error code otherwise.
  */
 static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
-			       unsigned long reuse, gfp_t gfp_mask)
+			       unsigned long reuse)
 {
 	LIST_HEAD(vmemmap_pages);
 	struct vmemmap_remap_walk walk = {
@@ -430,7 +430,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 	/* See the comment in the vmemmap_remap_free(). */
 	BUG_ON(start - reuse != PAGE_SIZE);
 
-	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
+	if (alloc_vmemmap_page_list(start, end, &vmemmap_pages))
 		return -ENOMEM;
 
 	mmap_read_lock(&init_mm);
@@ -476,8 +476,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 	 * When a HugeTLB page is freed to the buddy allocator, previously
 	 * discarded vmemmap pages must be allocated and remapping.
 	 */
-	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse,
-				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
+	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
 	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);
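The hunks above only show where the new gfp_mask is declared; the allocation loop that consumes it is unchanged by this patch and is not part of the diff. For context, a rough sketch of how alloc_vmemmap_page_list() drives the page allocator with this mask (an approximation of the surrounding function, not taken verbatim from the file):

static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
				   struct list_head *list)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	struct page *page, *next;

	/*
	 * Allocate one base page per vmemmap page being restored, on the
	 * hugepage's own node; __GFP_RETRY_MAYFAIL lets reclaim retry a
	 * few times without triggering the OOM killer.
	 */
	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add_tail(&page->lru, list);
	}

	return 0;
out:
	/* Roll back any partial allocation so the caller can fail cleanly. */
	list_for_each_entry_safe(page, next, list, lru)
		__free_pages(page, 0);
	return -ENOMEM;
}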
