
Commit a2d7f8e

hnaz authored and torvalds committed
mm: don't avoid high-priority reclaim on unreclaimable nodes
Commit 246e87a ("memcg: fix get_scan_count() for small targets") sought
to avoid high reclaim priorities for kswapd by forcing it to scan a
minimum amount of pages when lru_pages >> priority yielded nothing.

Commit b95a2f2 ("mm: vmscan: convert global reclaim to per-memcg LRU
lists"), due to switching global reclaim to a round-robin scheme over
all cgroups, had to restrict this forceful behavior to unreclaimable
zones in order to prevent massive overreclaim with many cgroups.

The latter patch effectively neutered the behavior completely for all
but extreme memory pressure. But in those situations we might as well
drop the reclaimers to lower priority levels. Remove the check.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Hillf Danton <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Jia He <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 15038d0 commit a2d7f8e

File tree

1 file changed: +5 -14 lines changed

mm/vmscan.c

Lines changed: 5 additions & 14 deletions
@@ -2130,22 +2130,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	int pass;
 
 	/*
-	 * If the zone or memcg is small, nr[l] can be 0.  This
-	 * results in no scanning on this priority and a potential
-	 * priority drop.  Global direct reclaim can go to the next
-	 * zone and tends to have no problems.  Global kswapd is for
-	 * zone balancing and it needs to scan a minimum amount.  When
+	 * If the zone or memcg is small, nr[l] can be 0.  When
 	 * reclaiming for a memcg, a priority drop can cause high
-	 * latencies, so it's better to scan a minimum amount there as
-	 * well.
+	 * latencies, so it's better to scan a minimum amount.  When a
+	 * cgroup has already been deleted, scrape out the remaining
+	 * cache forcefully to get rid of the lingering state.
 	 */
-	if (current_is_kswapd()) {
-		if (!pgdat_reclaimable(pgdat))
-			force_scan = true;
-		if (!mem_cgroup_online(memcg))
-			force_scan = true;
-	}
-	if (!global_reclaim(sc))
+	if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
 		force_scan = true;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
