
Commit 28f836b

gormanm (Mel Gorman) authored and torvalds (Linus Torvalds) committed
mm/page_alloc: split per cpu page lists and zone stats
The PCP (per-cpu page allocator in page_alloc.c) shares locking requirements with vmstat and the zone lock which is inconvenient and causes some issues. For example, the PCP list and vmstat share the same per-cpu space meaning that it's possible that vmstat updates dirty cache lines holding per-cpu lists across CPUs unless padding is used. Second, PREEMPT_RT does not want to disable IRQs for too long in the page allocator.

This series splits the locking requirements and uses lock types more suitable for PREEMPT_RT, reduces the time when special locking is required for stats and reduces the time when IRQs need to be disabled on !PREEMPT_RT kernels.

Why local_lock? PREEMPT_RT considers the following sequence to be unsafe as documented in Documentation/locking/locktypes.rst

	local_irq_disable();
	spin_lock(&lock);

The PCP allocator has this sequence for rmqueue_pcplist (local_irq_save) -> __rmqueue_pcplist -> rmqueue_bulk (spin_lock). While it's possible to separate this out, it generally means there are points where we enable IRQs and reenable them again immediately. To prevent a migration and the per-cpu pointer going stale, migrate_disable is also needed. That is a custom lock that is similar to, but worse than, local_lock. Furthermore, on PREEMPT_RT, it's undesirable to leave IRQs disabled for too long. By converting to local_lock, which disables migration on PREEMPT_RT, the locking requirements can be separated and the protections for the PCP, stats and the zone lock can start moving to PREEMPT_RT-safe equivalents. As a bonus, local_lock also means that PROVE_LOCKING does something useful.

After that, it's obvious that zone_statistics incurs too much overhead and leaves IRQs disabled for longer than necessary on !PREEMPT_RT kernels. zone_statistics uses perfectly accurate counters requiring IRQs be disabled for parallel RMW sequences when inaccurate ones like vm_events would do. The series makes the NUMA statistics (NUMA_HIT and friends) inaccurate counters that then require no special protection on !PREEMPT_RT.

The bulk page allocator can then do stat updates in bulk with IRQs enabled which should improve the efficiency. Technically, this could have been done without the local_lock and vmstat conversion work; the order simply reflects the timing of when the different series were implemented.

Finally, there are places where we conflate IRQs being disabled for the PCP with the IRQ-safe zone spinlock. The remainder of the series reduces the scope of what is protected by disabled IRQs on !PREEMPT_RT kernels. By the end of the series, page_alloc.c does not call local_irq_save so the locking scope is a bit clearer. The one exception is that modifying NR_FREE_PAGES still happens in places where it's known that IRQs are disabled, as it's harmless for PREEMPT_RT and would be expensive to split the locking there.

No performance data is included because, despite the overhead of the stats, it's within the noise for most workloads on !PREEMPT_RT. However, Jesper Dangaard Brouer ran a page allocation microbenchmark on an E5-1650 v4 @ 3.60GHz CPU on the first version of this series. Focusing on the array variant of the bulk page allocator reveals the following:
(CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz)
ARRAY variant: time_bulk_page_alloc_free_array: step=bulk size

         Baseline        Patched
  1      56.383          54.225 (+3.83%)
  2      40.047          35.492 (+11.38%)
  3      37.339          32.643 (+12.58%)
  4      35.578          30.992 (+12.89%)
  8      33.592          29.606 (+11.87%)
 16      32.362          28.532 (+11.85%)
 32      31.476          27.728 (+11.91%)
 64      30.633          27.252 (+11.04%)
128      30.596          27.090 (+11.46%)

While this is a positive outcome, the series is more likely to be interesting to the RT people in terms of getting parts of the PREEMPT_RT tree into mainline.

This patch (of 9):

The per-cpu page allocator lists and the per-cpu vmstat deltas are stored in the same struct per_cpu_pageset even though vmstats have no direct impact on the per-cpu page lists. This is inconsistent because the vmstats for a node are stored on a dedicated structure. The bigger issue is that the per_cpu_pageset structure is not cache-aligned and stat updates either cache-conflict with adjacent per-cpu lists, incurring a runtime cost, or padding is required, incurring a memory cost.

This patch splits the per-cpu pagelists and the vmstat deltas into separate structures. It's mostly a mechanical conversion but some variable renaming is done to clearly distinguish the per-cpu pages structure (pcp) from the vmstats (pzstats).

Superficially, this appears to increase the size of the per_cpu_pages structure but the movement of expire fills a structure hole so there is no impact overall.

[[email protected]: make it W=1 cleaner]
  Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: make it W=1 even cleaner]
  Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: check struct per_cpu_zonestat has a non-zero size]
[[email protected]: Init zone->per_cpu_zonestats properly]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mel Gorman <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Jesper Dangaard Brouer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
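As a rough sketch of the local_lock pattern the cover letter describes: this is not part of this patch, the per-CPU lock pcp_locks and the helper example_free_to_pcp are invented names for illustration, and the actual conversion happens in later patches of the series.

/*
 * Illustrative sketch only -- not code from this commit. It contrasts the
 * local_irq_save() pattern still used in this patch with the local_lock
 * pattern the series moves towards. "pcp_locks" and "example_free_to_pcp"
 * are made-up names for this sketch.
 */
#include <linux/local_lock.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(local_lock_t, pcp_locks) = INIT_LOCAL_LOCK(pcp_locks);

static void example_free_to_pcp(struct zone *zone, struct page *page,
				int migratetype)
{
	struct per_cpu_pages *pcp;
	unsigned long flags;

	/*
	 * On !PREEMPT_RT this disables IRQs much like local_irq_save().
	 * On PREEMPT_RT it only disables migration and takes a per-CPU
	 * sleeping lock, so acquiring the zone spinlock further down the
	 * call chain stays legal and PROVE_LOCKING can track it.
	 */
	local_lock_irqsave(&pcp_locks, flags);

	pcp = this_cpu_ptr(zone->per_cpu_pageset);
	list_add(&page->lru, &pcp->lists[migratetype]);
	pcp->count++;

	local_unlock_irqrestore(&pcp_locks, flags);
}

This patch itself only performs the structure split; local_irq_save() remains in place in the diff below until the locking is converted later in the series.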
1 parent a0b8200 commit 28f836b

File tree

4 files changed: +113 −96 lines changed


include/linux/mmzone.h

Lines changed: 10 additions & 8 deletions
@@ -341,20 +341,21 @@ struct per_cpu_pages {
 	int count;		/* number of pages in the list */
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
+#ifdef CONFIG_NUMA
+	int expire;		/* When 0, remote pagesets are drained */
+#endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[MIGRATE_PCPTYPES];
 };
 
-struct per_cpu_pageset {
-	struct per_cpu_pages pcp;
-#ifdef CONFIG_NUMA
-	s8 expire;
-	u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
-#endif
+struct per_cpu_zonestat {
 #ifdef CONFIG_SMP
-	s8 stat_threshold;
 	s8 vm_stat_diff[NR_VM_ZONE_STAT_ITEMS];
+	s8 stat_threshold;
+#endif
+#ifdef CONFIG_NUMA
+	u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
 #endif
 };
 
@@ -484,7 +485,8 @@ struct zone {
 	int node;
 #endif
 	struct pglist_data	*zone_pgdat;
-	struct per_cpu_pageset __percpu *pageset;
+	struct per_cpu_pages	__percpu *per_cpu_pageset;
+	struct per_cpu_zonestat	__percpu *per_cpu_zonestats;
 	/*
 	 * the high and batch values are copied to individual pagesets for
 	 * faster access

include/linux/vmstat.h

Lines changed: 4 additions & 4 deletions
@@ -163,7 +163,7 @@ static inline unsigned long zone_numa_state_snapshot(struct zone *zone,
 	int cpu;
 
 	for_each_online_cpu(cpu)
-		x += per_cpu_ptr(zone->pageset, cpu)->vm_numa_stat_diff[item];
+		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_numa_stat_diff[item];
 
 	return x;
 }
@@ -236,7 +236,7 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 #ifdef CONFIG_SMP
 	int cpu;
 	for_each_online_cpu(cpu)
-		x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];
+		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];
 
 	if (x < 0)
 		x = 0;
@@ -291,7 +291,7 @@ struct ctl_table;
 int vmstat_refresh(struct ctl_table *, int write, void *buffer, size_t *lenp,
 		loff_t *ppos);
 
-void drain_zonestat(struct zone *zone, struct per_cpu_pageset *);
+void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *);
 
 int calculate_pressure_threshold(struct zone *zone);
 int calculate_normal_threshold(struct zone *zone);
@@ -399,7 +399,7 @@ static inline void cpu_vm_stats_fold(int cpu) { }
 static inline void quiet_vmstat(void) { }
 
 static inline void drain_zonestat(struct zone *zone,
-			struct per_cpu_pageset *pset) { }
+			struct per_cpu_zonestat *pzstats) { }
 #endif	/* CONFIG_SMP */
 
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,

mm/page_alloc.c

Lines changed: 47 additions & 38 deletions
@@ -3026,15 +3026,14 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
 	unsigned long flags;
-	struct per_cpu_pageset *pset;
 	struct per_cpu_pages *pcp;
 
 	local_irq_save(flags);
-	pset = per_cpu_ptr(zone->pageset, cpu);
 
-	pcp = &pset->pcp;
+	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
 	if (pcp->count)
 		free_pcppages_bulk(zone, pcp->count, pcp);
+
 	local_irq_restore(flags);
 }
 
@@ -3133,7 +3132,7 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 	 * disables preemption as part of its processing
 	 */
 	for_each_online_cpu(cpu) {
-		struct per_cpu_pageset *pcp;
+		struct per_cpu_pages *pcp;
 		struct zone *z;
 		bool has_pcps = false;
 
@@ -3144,13 +3143,13 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 			 */
 			has_pcps = true;
 		} else if (zone) {
-			pcp = per_cpu_ptr(zone->pageset, cpu);
-			if (pcp->pcp.count)
+			pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+			if (pcp->count)
 				has_pcps = true;
 		} else {
 			for_each_populated_zone(z) {
-				pcp = per_cpu_ptr(z->pageset, cpu);
-				if (pcp->pcp.count) {
+				pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
+				if (pcp->count) {
 					has_pcps = true;
 					break;
 				}
@@ -3280,7 +3279,7 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 		migratetype = MIGRATE_MOVABLE;
 	}
 
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= READ_ONCE(pcp->high))
@@ -3496,7 +3495,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	unsigned long flags;
 
 	local_irq_save(flags);
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	if (page) {
@@ -5105,7 +5104,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 
 	/* Attempt the batch allocation */
 	local_irq_save(flags);
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	pcp_list = &pcp->lists[ac.migratetype];
 
 	while (nr_populated < nr_pages) {
@@ -5720,7 +5719,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			continue;
 
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->pageset, cpu)->pcp.count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
 	}
 
 	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
@@ -5812,7 +5811,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 
 		free_pcp = 0;
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->pageset, cpu)->pcp.count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
 
 		show_node(zone);
 		printk(KERN_CONT
@@ -5853,7 +5852,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			K(zone_page_state(zone, NR_MLOCK)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
-			K(this_cpu_read(zone->pageset->pcp.count)),
+			K(this_cpu_read(zone->per_cpu_pageset->count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
@@ -6180,11 +6179,12 @@ static void build_zonelists(pg_data_t *pgdat)
  * not check if the processor is online before following the pageset pointer.
  * Other parts of the kernel may not check if the zone is available.
  */
-static void pageset_init(struct per_cpu_pageset *p);
+static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats);
 /* These effectively disable the pcplists in the boot pageset completely */
 #define BOOT_PAGESET_HIGH	0
 #define BOOT_PAGESET_BATCH	1
-static DEFINE_PER_CPU(struct per_cpu_pageset, boot_pageset);
+static DEFINE_PER_CPU(struct per_cpu_pages, boot_pageset);
+static DEFINE_PER_CPU(struct per_cpu_zonestat, boot_zonestats);
 static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
 static void __build_all_zonelists(void *data)
@@ -6251,7 +6251,7 @@ build_all_zonelists_init(void)
 	 * (a chicken-egg dilemma).
 	 */
 	for_each_possible_cpu(cpu)
-		pageset_init(&per_cpu(boot_pageset, cpu));
+		per_cpu_pages_init(&per_cpu(boot_pageset, cpu), &per_cpu(boot_zonestats, cpu));
 
 	mminit_verify_zonelist();
 	cpuset_init_current_mems_allowed();
@@ -6650,14 +6650,13 @@ static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
 	WRITE_ONCE(pcp->high, high);
 }
 
-static void pageset_init(struct per_cpu_pageset *p)
+static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
 {
-	struct per_cpu_pages *pcp;
 	int migratetype;
 
-	memset(p, 0, sizeof(*p));
+	memset(pcp, 0, sizeof(*pcp));
+	memset(pzstats, 0, sizeof(*pzstats));
 
-	pcp = &p->pcp;
 	for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++)
 		INIT_LIST_HEAD(&pcp->lists[migratetype]);
 
@@ -6674,12 +6673,12 @@ static void pageset_init(struct per_cpu_pageset *p)
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
 		unsigned long batch)
 {
-	struct per_cpu_pageset *p;
+	struct per_cpu_pages *pcp;
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		p = per_cpu_ptr(zone->pageset, cpu);
-		pageset_update(&p->pcp, high, batch);
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+		pageset_update(pcp, high, batch);
 	}
 }
 
@@ -6714,13 +6713,20 @@ static void zone_set_pageset_high_and_batch(struct zone *zone)
 
 void __meminit setup_zone_pageset(struct zone *zone)
 {
-	struct per_cpu_pageset *p;
 	int cpu;
 
-	zone->pageset = alloc_percpu(struct per_cpu_pageset);
+	/* Size may be 0 on !SMP && !NUMA */
+	if (sizeof(struct per_cpu_zonestat) > 0)
+		zone->per_cpu_zonestats = alloc_percpu(struct per_cpu_zonestat);
+
+	zone->per_cpu_pageset = alloc_percpu(struct per_cpu_pages);
 	for_each_possible_cpu(cpu) {
-		p = per_cpu_ptr(zone->pageset, cpu);
-		pageset_init(p);
+		struct per_cpu_pages *pcp;
+		struct per_cpu_zonestat *pzstats;
+
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
+		per_cpu_pages_init(pcp, pzstats);
 	}
 
 	zone_set_pageset_high_and_batch(zone);
@@ -6747,9 +6753,9 @@ void __init setup_per_cpu_pageset(void)
 	 * the nodes these zones are associated with.
 	 */
 	for_each_possible_cpu(cpu) {
-		struct per_cpu_pageset *pcp = &per_cpu(boot_pageset, cpu);
-		memset(pcp->vm_numa_stat_diff, 0,
-		       sizeof(pcp->vm_numa_stat_diff));
+		struct per_cpu_zonestat *pzstats = &per_cpu(boot_zonestats, cpu);
+		memset(pzstats->vm_numa_stat_diff, 0,
+		       sizeof(pzstats->vm_numa_stat_diff));
 	}
 #endif
 
@@ -6765,7 +6771,8 @@ static __meminit void zone_pcp_init(struct zone *zone)
 	 * relies on the ability of the linker to provide the
 	 * offset of a (static) per cpu variable into the per cpu area.
 	 */
-	zone->pageset = &boot_pageset;
+	zone->per_cpu_pageset = &boot_pageset;
+	zone->per_cpu_zonestats = &boot_zonestats;
 	zone->pageset_high = BOOT_PAGESET_HIGH;
 	zone->pageset_batch = BOOT_PAGESET_BATCH;
 
@@ -9046,15 +9053,17 @@ void zone_pcp_enable(struct zone *zone)
 void zone_pcp_reset(struct zone *zone)
 {
 	int cpu;
-	struct per_cpu_pageset *pset;
+	struct per_cpu_zonestat *pzstats;
 
-	if (zone->pageset != &boot_pageset) {
+	if (zone->per_cpu_pageset != &boot_pageset) {
 		for_each_online_cpu(cpu) {
-			pset = per_cpu_ptr(zone->pageset, cpu);
-			drain_zonestat(zone, pset);
+			pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
+			drain_zonestat(zone, pzstats);
 		}
-		free_percpu(zone->pageset);
-		zone->pageset = &boot_pageset;
+		free_percpu(zone->per_cpu_pageset);
+		free_percpu(zone->per_cpu_zonestats);
+		zone->per_cpu_pageset = &boot_pageset;
+		zone->per_cpu_zonestats = &boot_zonestats;
 	}
 }
 
