
Commit 95bbc0c

Zhihui Zhang authored and torvalds committed
mm: rename RECLAIM_SWAP to RECLAIM_UNMAP
The name SWAP implies that we are dealing with anonymous pages only. In fact, the original patch that introduced the min_unmapped_ratio logic was to fix an issue related to file pages. Rename it to RECLAIM_UNMAP to match what it does.

Historically, commit a6dc60f ("vmscan: rename sc.may_swap to may_unmap") renamed .may_swap to .may_unmap, leaving RECLAIM_SWAP behind. Commit 2e2e425 ("vmscan,memcg: reintroduce sc->may_swap") reintroduced .may_swap for the memory controller.

Signed-off-by: Zhihui Zhang <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent f012a84 commit 95bbc0c

1 file changed: 6 additions, 6 deletions


mm/vmscan.c

Lines changed: 6 additions & 6 deletions
@@ -3597,7 +3597,7 @@ int zone_reclaim_mode __read_mostly;
 #define RECLAIM_OFF 0
 #define RECLAIM_ZONE (1<<0)   /* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE (1<<1)  /* Writeout pages during reclaim */
-#define RECLAIM_SWAP (1<<2)   /* Swap pages out during reclaim */
+#define RECLAIM_UNMAP (1<<2)  /* Unmap pages during reclaim */
 
 /*
  * Priority for ZONE_RECLAIM. This determines the fraction of pages
@@ -3639,12 +3639,12 @@ static long zone_pagecache_reclaimable(struct zone *zone)
         long delta = 0;
 
         /*
-         * If RECLAIM_SWAP is set, then all file pages are considered
+         * If RECLAIM_UNMAP is set, then all file pages are considered
          * potentially reclaimable. Otherwise, we have to worry about
          * pages like swapcache and zone_unmapped_file_pages() provides
          * a better estimate
          */
-        if (zone_reclaim_mode & RECLAIM_SWAP)
+        if (zone_reclaim_mode & RECLAIM_UNMAP)
                 nr_pagecache_reclaimable = zone_page_state(zone, NR_FILE_PAGES);
         else
                 nr_pagecache_reclaimable = zone_unmapped_file_pages(zone);
@@ -3675,15 +3675,15 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
                 .order = order,
                 .priority = ZONE_RECLAIM_PRIORITY,
                 .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
-                .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
+                .may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
                 .may_swap = 1,
         };
 
         cond_resched();
         /*
-         * We need to be able to allocate from the reserves for RECLAIM_SWAP
+         * We need to be able to allocate from the reserves for RECLAIM_UNMAP
          * and we also need to be able to write out pages for RECLAIM_WRITE
-         * and RECLAIM_SWAP.
+         * and RECLAIM_UNMAP.
          */
         p->flags |= PF_MEMALLOC | PF_SWAPWRITE;
         lockdep_set_current_reclaim_state(gfp_mask);
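
For context, here is a minimal standalone sketch (not part of this patch) of how the zone_reclaim_mode bits combine and how the renamed RECLAIM_UNMAP bit feeds the may_unmap field set up in __zone_reclaim() above. The sample mode value is an illustrative assumption, not taken from the commit.

/*
 * Sketch only: mimics the bit tests done in __zone_reclaim(); this is
 * plain userspace C, not kernel code.
 */
#include <stdio.h>

#define RECLAIM_OFF    0
#define RECLAIM_ZONE  (1 << 0)  /* Run shrink_inactive_list on the zone */
#define RECLAIM_WRITE (1 << 1)  /* Writeout pages during reclaim */
#define RECLAIM_UNMAP (1 << 2)  /* Unmap pages during reclaim */

int main(void)
{
        /* Illustrative mode: zone reclaim on, unmapping allowed, no writeout. */
        int zone_reclaim_mode = RECLAIM_ZONE | RECLAIM_UNMAP;

        int may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE);  /* 0 here */
        int may_unmap     = !!(zone_reclaim_mode & RECLAIM_UNMAP);  /* 1 here */

        printf("may_writepage=%d may_unmap=%d\n", may_writepage, may_unmap);
        return 0;
}

With the rename, the flag name matches the .may_unmap field it controls; the bit value (1<<2) itself is unchanged by the patch.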
