Commit 6867c7a

T.J. Mercier authored and akpm00 committed
mm: multi-gen LRU: don't spin during memcg release
When a memcg is in the process of being released, mem_cgroup_tryget() will fail because its reference count has already reached 0. This can happen during reclaim if the memcg has already been offlined, and we reclaim all remaining pages attributed to the offlined memcg. shrink_many() attempts to skip the empty memcg in this case and continue reclaiming from the remaining memcgs in the old generation. If there is only one memcg remaining, or if all remaining memcgs are in the process of being released, then shrink_many() will spin until all memcgs have finished being released. The release occurs through a workqueue, so it can take a while before kswapd is able to make any further progress.

This fix results in reductions in kswapd activity and direct reclaim in a test where 28 apps (working set size > total memory) are repeatedly launched in a random sequence:

                                       A          B      delta   ratio(%)
           allocstall_movable       5962       3539      -2423     -40.64
            allocstall_normal       2661       2417       -244      -9.17
kswapd_high_wmark_hit_quickly      53152       7594     -45558     -85.71
                   pageoutrun      57365      11750     -45615     -79.52

Link: https://lkml.kernel.org/r/[email protected]
Fixes: e4dde56 ("mm: multi-gen LRU: per-node lru_gen_folio lists")
Signed-off-by: T.J. Mercier <[email protected]>
Acked-by: Yu Zhao <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent e2c1ab0 commit 6867c7a

File tree

1 file changed: +8 -5 lines


mm/vmscan.c

Lines changed: 8 additions & 5 deletions
@@ -4854,16 +4854,17 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
 
 		spin_lock_irq(&pgdat->memcg_lru.lock);
 
-		VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
+		if (hlist_nulls_unhashed(&lruvec->lrugen.list))
+			goto unlock;
 
 		gen = lruvec->lrugen.gen;
 
-		hlist_nulls_del_rcu(&lruvec->lrugen.list);
+		hlist_nulls_del_init_rcu(&lruvec->lrugen.list);
 		pgdat->memcg_lru.nr_memcgs[gen]--;
 
 		if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq))
 			WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
-
+unlock:
 		spin_unlock_irq(&pgdat->memcg_lru.lock);
 	}
 }
@@ -5435,16 +5436,18 @@ static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 		rcu_read_lock();
 
 		hlist_nulls_for_each_entry_rcu(lrugen, pos, &pgdat->memcg_lru.fifo[gen][bin], list) {
-			if (op)
+			if (op) {
 				lru_gen_rotate_memcg(lruvec, op);
+				op = 0;
+			}
 
 			mem_cgroup_put(memcg);
 
 			lruvec = container_of(lrugen, struct lruvec, lrugen);
 			memcg = lruvec_memcg(lruvec);
 
 			if (!mem_cgroup_tryget(memcg)) {
-				op = 0;
+				lru_gen_release_memcg(memcg);
 				memcg = NULL;
 				continue;
 			}
