
Commit 072bb0a

Mel Gorman authored and torvalds committed
mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages
When a user or administrator requires swap for their application, they create a swap partition or file, format it with mkswap and activate it with swapon. Swap over the network is considered as an option in diskless systems. The two likely scenarios are when blade servers are used as part of a cluster where the form factor or maintenance costs do not allow the use of disks, and thin clients.

The Linux Terminal Server Project recommends the use of the Network Block Device (NBD) for swap according to the manual at https://sourceforge.net/projects/ltsp/files/Docs-Admin-Guide/LTSPManual.pdf/download and there is also documentation and tutorials on how to set up swap over NBD at places like https://help.ubuntu.com/community/UbuntuLTSP/EnableNBDSWAP. The nbd-client also documents the use of NBD as swap. Despite this, the fact is that a machine using NBD for swap can deadlock within minutes if swap is used intensively. This patch series addresses the problem.

The core issue is that network block devices do not use mempools like normal block devices do. As the host cannot control where they receive packets from, they cannot reliably work out in advance how much memory they might need.

Some years ago, Peter Zijlstra developed a series of patches that supported swap over an NFS that at least one distribution is carrying within their kernels. This patch series borrows very heavily from Peter's work to support swapping over NBD as a pre-requisite to supporting swap-over-NFS. The bulk of the complexity is concerned with preserving memory that is allocated from the PFMEMALLOC reserves for use by the network layer which is needed for both NBD and NFS.

Patch 1 adds knowledge of the PFMEMALLOC reserves to SLAB and SLUB to preserve access to pages allocated under low memory situations to callers that are freeing memory.

Patch 2 optimises the SLUB fast path to avoid pfmemalloc checks.

Patch 3 introduces __GFP_MEMALLOC to allow access to the PFMEMALLOC reserves without setting PFMEMALLOC.

Patch 4 opens the possibility for softirqs to use PFMEMALLOC reserves for later use by network packet processing.

Patch 5 only sets page->pfmemalloc when ALLOC_NO_WATERMARKS was required.

Patch 6 ignores memory policies when ALLOC_NO_WATERMARKS is set.

Patches 7-12 allow network processing to use PFMEMALLOC reserves when the socket has been marked as being used by the VM to clean pages. If packets are received and stored in pages that were allocated under low-memory situations and are unrelated to the VM, the packets are dropped.

Patch 11 reintroduces __skb_alloc_page which the networking folk may object to but is needed in some cases to propagate pfmemalloc from a newly allocated page to an skb. If there is a strong objection, this patch can be dropped with the impact being that swap-over-network will be slower in some cases but it should not fail.

Patch 13 is a micro-optimisation to avoid a function call in the common case.

Patch 14 tags NBD sockets as being SOCK_MEMALLOC so they can use PFMEMALLOC if necessary.

Patch 15 notes that it is still possible for the PFMEMALLOC reserve to be depleted. To prevent this, direct reclaimers get throttled on a waitqueue if 50% of the PFMEMALLOC reserves are depleted. It is expected that kswapd and the direct reclaimers already running will clean enough pages for the low watermark to be reached and the throttled processes are woken up.

Patch 16 adds a statistic to track how often processes get throttled.

Some basic performance testing was run using kernel builds, netperf on loopback for UDP and TCP, hackbench (pipes and sockets), iozone and sysbench. Each of them was expected to use the sl*b allocators reasonably heavily but there did not appear to be significant performance variances.

For testing swap-over-NBD, a machine was booted with 2G of RAM with a swapfile backed by NBD. 8*NUM_CPU processes were started that create anonymous memory mappings and read them linearly in a loop. The total size of the mappings was 4*PHYSICAL_MEMORY to use swap heavily under memory pressure.

Without the patches and using SLUB, the machine locks up within minutes; with them applied, it runs to completion. With SLAB, the story is different as an unpatched kernel runs to completion. However, the patched kernel completed the test 45% faster.

MICRO
                                         3.5.0-rc2 3.5.0-rc2
                                           vanilla   swapnbd
Unrecognised test vmscan-anon-mmap-write
MMTests Statistics: duration
Sys Time Running Test (seconds)             197.80    173.07
User+Sys Time Running Test (seconds)        206.96    182.03
Total Elapsed Time (seconds)               3240.70   1762.09

This patch: mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages

Allocations of pages below the min watermark run a risk of the machine hanging due to a lack of memory. To prevent this, only callers who have PF_MEMALLOC or TIF_MEMDIE set and are not processing an interrupt are allowed to allocate with ALLOC_NO_WATERMARKS. Once they are allocated to a slab though, nothing prevents other callers consuming free objects within those slabs. This patch limits access to slab pages that were allocated from the PFMEMALLOC reserves.

When this patch is applied, pages allocated from below the low watermark are returned with page->pfmemalloc set and it is up to the caller to determine how the page should be protected. SLAB restricts access to any page with page->pfmemalloc set to callers which are known to be able to access the PFMEMALLOC reserve. If one is not available, an attempt is made to allocate a new page rather than use a reserve.

SLUB is a bit more relaxed in that it only records if the current per-CPU page was allocated from PFMEMALLOC reserve and uses another partial slab if the caller does not have the necessary GFP or process flags. This was found to be sufficient in tests to avoid hangs due to SLUB generally maintaining smaller lists than SLAB.

In low-memory conditions it does mean that !PFMEMALLOC allocators can fail a slab allocation even though free objects are available because they are being preserved for callers that are freeing pages.

[[email protected]: Original implementation]
[[email protected]: Correct order of page flag clearing]
Signed-off-by: Mel Gorman <[email protected]>
Cc: David Miller <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
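To make the gating rule concrete, here is a small standalone C model of the decision SLAB and SLUB make before handing out an object from a reserve-backed slab page: only callers that are themselves freeing memory, or that explicitly request reserve access, qualify. The names and flag values below are illustrative stand-ins rather than the kernel's definitions.

/*
 * Standalone model of the pfmemalloc gating rule described above.
 * The flag values, struct and helper are illustrative stand-ins,
 * not the kernel's definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define PF_MEMALLOC	0x1	/* task is itself freeing memory (direct reclaim) */
#define TIF_MEMDIE	0x2	/* task was selected by the OOM killer */
#define GFP_MEMALLOC	0x4	/* explicit access to reserves (patch 3 of the series) */

struct caller {
	unsigned int task_flags;
	unsigned int gfp_flags;
};

/*
 * Objects from a pfmemalloc-backed slab page may only feed callers that
 * are expected to free memory soon; everyone else must be pushed towards
 * a freshly allocated, clean slab page.
 */
static bool may_use_pfmemalloc_slab(const struct caller *c)
{
	return (c->task_flags & (PF_MEMALLOC | TIF_MEMDIE)) ||
	       (c->gfp_flags & GFP_MEMALLOC);
}

int main(void)
{
	struct caller reclaimer = { .task_flags = PF_MEMALLOC };
	struct caller ordinary  = { 0 };

	printf("reclaimer may use pfmemalloc slab: %d\n",
	       may_use_pfmemalloc_slab(&reclaimer));	/* 1 */
	printf("ordinary caller may use it:       %d\n",
	       may_use_pfmemalloc_slab(&ordinary));	/* 0 */
	return 0;
}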
1 parent 702d1a6 commit 072bb0a

6 files changed, 264 additions and 25 deletions


include/linux/mm_types.h

Lines changed: 9 additions & 0 deletions
@@ -54,6 +54,15 @@ struct page {
 	union {
 		pgoff_t index;		/* Our offset within mapping. */
 		void *freelist;		/* slub/slob first free object */
+		bool pfmemalloc;	/* If set by the page allocator,
+					 * ALLOC_PFMEMALLOC was set
+					 * and the low watermark was not
+					 * met implying that the system
+					 * is under some pressure. The
+					 * caller should try ensure
+					 * this page is only used to
+					 * free other pages.
+					 */
 	};

 	union {
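For context on the placement: the new field shares a union with index and freelist, so the hint is only valid at the moment the page allocator hands the page over; once sl*b installs its freelist the bit is gone, which is why the page-flags.h helpers further down preserve the state in PG_active. A small userspace model of that hand-over, using stand-in types rather than the real struct page, is:

/*
 * Userspace model only: fake_page is a stand-in for struct page.
 * It shows why the pfmemalloc hint must be captured before the
 * union is reused for the slab freelist.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	union {
		unsigned long index;	/* offset within mapping */
		void *freelist;		/* first free object once sl*b owns the page */
		bool pfmemalloc;	/* allocator hint, valid only at hand-over */
	};
	bool slab_pfmemalloc;		/* where the hint is preserved (PG_active
					 * via SetPageSlabPfmemalloc() in the kernel) */
};

int main(void)
{
	struct fake_page page = { .pfmemalloc = true };

	/* Capture the hint before the union is reused for the freelist. */
	page.slab_pfmemalloc = page.pfmemalloc;
	page.freelist = &page;		/* clobbers the pfmemalloc member */

	printf("hint preserved for the slab's lifetime: %d\n",
	       page.slab_pfmemalloc);	/* 1 */
	return 0;
}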

include/linux/page-flags.h

Lines changed: 29 additions & 0 deletions
@@ -7,6 +7,7 @@

 #include <linux/types.h>
 #include <linux/bug.h>
+#include <linux/mmdebug.h>
 #ifndef __GENERATING_BOUNDS_H
 #include <linux/mm_types.h>
 #include <generated/bounds.h>
@@ -453,6 +454,34 @@ static inline int PageTransTail(struct page *page)
 }
 #endif

+/*
+ * If network-based swap is enabled, sl*b must keep track of whether pages
+ * were allocated from pfmemalloc reserves.
+ */
+static inline int PageSlabPfmemalloc(struct page *page)
+{
+	VM_BUG_ON(!PageSlab(page));
+	return PageActive(page);
+}
+
+static inline void SetPageSlabPfmemalloc(struct page *page)
+{
+	VM_BUG_ON(!PageSlab(page));
+	SetPageActive(page);
+}
+
+static inline void __ClearPageSlabPfmemalloc(struct page *page)
+{
+	VM_BUG_ON(!PageSlab(page));
+	__ClearPageActive(page);
+}
+
+static inline void ClearPageSlabPfmemalloc(struct page *page)
+{
+	VM_BUG_ON(!PageSlab(page));
+	ClearPageActive(page);
+}
+
 #ifdef CONFIG_MMU
 #define __PG_MLOCKED		(1 << PG_mlocked)
 #else
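These wrappers reuse PG_active and assert PageSlab(), so the pfmemalloc state has to be cleared while the page is still marked as a slab page (the "Correct order of page flag clearing" note in the changelog). The userspace model below mirrors that lifecycle with stand-in flag bits and assert() standing in for VM_BUG_ON():

/* Userspace model of the SlabPfmemalloc wrappers; flag bits are stand-ins. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PG_SLAB		(1u << 0)
#define PG_ACTIVE	(1u << 1)	/* reused as "slab pfmemalloc" for slab pages */

struct fake_page {
	unsigned int flags;
};

static bool page_slab_pfmemalloc(const struct fake_page *p)
{
	assert(p->flags & PG_SLAB);	/* mirrors VM_BUG_ON(!PageSlab(page)) */
	return p->flags & PG_ACTIVE;
}

static void set_page_slab_pfmemalloc(struct fake_page *p)
{
	assert(p->flags & PG_SLAB);
	p->flags |= PG_ACTIVE;
}

static void clear_page_slab_pfmemalloc(struct fake_page *p)
{
	assert(p->flags & PG_SLAB);
	p->flags &= ~PG_ACTIVE;
}

int main(void)
{
	struct fake_page page = { .flags = PG_SLAB };

	set_page_slab_pfmemalloc(&page);
	printf("reserve-backed slab page: %d\n", page_slab_pfmemalloc(&page));

	/* Teardown order matters: clear the aliased bit first, then PG_SLAB. */
	clear_page_slab_pfmemalloc(&page);
	page.flags &= ~PG_SLAB;
	return 0;
}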

mm/internal.h

Lines changed: 3 additions & 0 deletions
@@ -279,6 +279,9 @@ static inline struct page *mem_map_next(struct page *iter,
 #define __paginginit __init
 #endif

+/* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
+bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
+
 /* Memory initialisation debug and verification */
 enum mminit_level {
 	MMINIT_WARNING,
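A hedged sketch of where this helper is meant to plug in: a slab allocator can combine it with PageSlabPfmemalloc() to decide whether the current caller may take an object from a reserve-backed slab. The function below is illustrative only; it is not the code this series adds to mm/slab.c or mm/slub.c.

#include <linux/gfp.h>
#include <linux/page-flags.h>
#include "internal.h"

/* Illustrative sketch, not the real slab-side code from this series. */
static void *maybe_take_pfmemalloc_object(struct page *slab_page, void *object,
					  gfp_t gfpflags)
{
	if (!PageSlabPfmemalloc(slab_page))
		return object;		/* ordinary slab page, no restriction */

	if (gfp_pfmemalloc_allowed(gfpflags))
		return object;		/* caller is entitled to the reserves */

	return NULL;			/* caller must find or allocate a clean slab */
}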

mm/page_alloc.c

Lines changed: 22 additions & 5 deletions
@@ -1513,6 +1513,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 #define ALLOC_HARDER		0x10 /* try to alloc harder */
 #define ALLOC_HIGH		0x20 /* __GFP_HIGH set */
 #define ALLOC_CPUSET		0x40 /* check for correct cpuset */
+#define ALLOC_PFMEMALLOC	0x80 /* Caller has PF_MEMALLOC set */

 #ifdef CONFIG_FAIL_PAGE_ALLOC

@@ -2293,16 +2294,22 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;

-	if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
-		if (!in_interrupt() &&
-		    ((current->flags & PF_MEMALLOC) ||
-		     unlikely(test_thread_flag(TIF_MEMDIE))))
+	if ((current->flags & PF_MEMALLOC) ||
+	    unlikely(test_thread_flag(TIF_MEMDIE))) {
+		alloc_flags |= ALLOC_PFMEMALLOC;
+
+		if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt())
 			alloc_flags |= ALLOC_NO_WATERMARKS;
 	}

 	return alloc_flags;
 }

+bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
+{
+	return !!(gfp_to_alloc_flags(gfp_mask) & ALLOC_PFMEMALLOC);
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
@@ -2490,10 +2497,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	warn_alloc_failed(gfp_mask, order, NULL);
 	return page;
 got_pg:
+	/*
+	 * page->pfmemalloc is set when the caller had PFMEMALLOC set or is
+	 * been OOM killed. The expectation is that the caller is taking
+	 * steps that will free more memory. The caller should avoid the
+	 * page being used for !PFMEMALLOC purposes.
+	 */
+	page->pfmemalloc = !!(alloc_flags & ALLOC_PFMEMALLOC);
+
 	if (kmemcheck_enabled)
 		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
-	return page;

+	return page;
 }

 /*
@@ -2544,6 +2559,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 		page = __alloc_pages_slowpath(gfp_mask, order,
 				zonelist, high_zoneidx, nodemask,
 				preferred_zone, migratetype);
+	else
+		page->pfmemalloc = false;

 	trace_mm_page_alloc(page, order, gfp_mask, migratetype);

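To see what the reworked gfp_to_alloc_flags() branch produces, the standalone model below mirrors its control flow: ALLOC_PFMEMALLOC now records that the caller is a memory-freeing task, while ALLOC_NO_WATERMARKS additionally requires that __GFP_NOMEMALLOC is absent and that the caller is not in interrupt context. ALLOC_PFMEMALLOC uses the value from the hunk above; the other constants and the context struct are stand-ins.

/*
 * Standalone model of the reworked gfp_to_alloc_flags() branch.
 * ALLOC_PFMEMALLOC matches the hunk above; the other values and the
 * context struct are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define ALLOC_NO_WATERMARKS	0x04	/* stand-in for the kernel define */
#define ALLOC_PFMEMALLOC	0x80	/* as added by this patch */
#define GFP_NOMEMALLOC		0x01	/* stand-in for __GFP_NOMEMALLOC */

struct alloc_ctx {
	bool pf_memalloc;	/* current->flags & PF_MEMALLOC */
	bool tif_memdie;	/* test_thread_flag(TIF_MEMDIE) */
	bool in_interrupt;
	unsigned int gfp_mask;
};

static unsigned int model_alloc_flags(const struct alloc_ctx *c)
{
	unsigned int alloc_flags = 0;

	if (c->pf_memalloc || c->tif_memdie) {
		alloc_flags |= ALLOC_PFMEMALLOC;

		if (!(c->gfp_mask & GFP_NOMEMALLOC) && !c->in_interrupt)
			alloc_flags |= ALLOC_NO_WATERMARKS;
	}
	return alloc_flags;
}

int main(void)
{
	struct alloc_ctx reclaim    = { .pf_memalloc = true };
	struct alloc_ctx softirq    = { .pf_memalloc = true, .in_interrupt = true };
	struct alloc_ctx nomemalloc = { .pf_memalloc = true,
					.gfp_mask = GFP_NOMEMALLOC };

	printf("direct reclaim:   0x%02x\n", model_alloc_flags(&reclaim));	/* 0x84 */
	printf("from interrupt:   0x%02x\n", model_alloc_flags(&softirq));	/* 0x80 */
	printf("__GFP_NOMEMALLOC: 0x%02x\n", model_alloc_flags(&nomemalloc));	/* 0x80 */
	return 0;
}

In all three cases page->pfmemalloc would be set in the got_pg path above; patch 5 of the series later narrows this to the cases where ALLOC_NO_WATERMARKS was actually required.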