Commit c3d5f5f

jiangliu authored and torvalds committed
mm: use a dedicated lock to protect totalram_pages and zone->managed_pages
Currently lock_memory_hotplug()/unlock_memory_hotplug() are used to
protect totalram_pages and zone->managed_pages.  But other than the
memory hotplug driver, totalram_pages and zone->managed_pages may also
be modified at runtime by other drivers, such as the Xen balloon and
virtio_balloon drivers.  For those cases the memory hotplug lock is a
little too heavy, so introduce a dedicated lock to protect
totalram_pages and zone->managed_pages.

Now we have simplified locking rules for totalram_pages and
zone->managed_pages:

1) No locking for read accesses, because they are aligned unsigned
   long values and word-sized reads cannot tear.
2) No locking for write accesses at boot time, which runs in
   single-threaded context.
3) Serialize write accesses at runtime by acquiring the dedicated
   managed_page_count_lock (a usage sketch follows below).

Also adjust zone->managed_pages when freeing reserved pages into the
buddy system, to keep totalram_pages and zone->managed_pages
consistent.

[[email protected]: don't export adjust_managed_page_count to modules (for now)]
Signed-off-by: Jiang Liu <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Michael S. Tsirkin" <[email protected]>
Cc: <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: David Howells <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Jianguo Wu <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Kamezawa Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Tang Chen <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Wen Congyang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yasuaki Ishimatsu <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Russell King <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
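As a hedged illustration of rule 3 (this sketch is not part of the commit, and the function names are hypothetical), a built-in balloon-style driver would route every runtime adjustment through the new helper instead of touching the counters directly. Note that the [akpm] fixup above deliberately keeps adjust_managed_page_count() unexported, so a modular driver cannot call it yet:

	#include <linux/gfp.h>		/* alloc_page(), __free_page() */
	#include <linux/mm.h>		/* adjust_managed_page_count() */

	/* Hypothetical built-in driver: "inflate" by taking one page out
	 * of the buddy system and hiding it from the accounting. */
	static struct page *example_inflate_one_page(void)
	{
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return NULL;
		/* Rule 3: the helper serializes on managed_page_count_lock. */
		adjust_managed_page_count(page, -1);
		return page;
	}

	/* "Deflate": make the page visible to the accounting again, then
	 * return it to the buddy system. */
	static void example_deflate_one_page(struct page *page)
	{
		adjust_managed_page_count(page, 1);
		__free_page(page);
	}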
1 parent: 7b4b2a0

3 files changed, 23 insertions(+), 8 deletions(-)

include/linux/mm.h (2 additions, 4 deletions)

@@ -1313,6 +1313,7 @@ extern void free_initmem(void);
  */
 extern unsigned long free_reserved_area(void *start, void *end,
 					int poison, char *s);
+
 #ifdef CONFIG_HIGHMEM
 /*
  * Free a highmem page into the buddy system, adjusting totalhigh_pages
@@ -1321,10 +1322,7 @@ extern unsigned long free_reserved_area(void *start, void *end,
 extern void free_highmem_page(struct page *page);
 #endif
 
-static inline void adjust_managed_page_count(struct page *page, long count)
-{
-	totalram_pages += count;
-}
+extern void adjust_managed_page_count(struct page *page, long count);
 
 /* Free the reserved page into the buddy system, so it gets managed. */
 static inline void __free_reserved_page(struct page *page)

include/linux/mmzone.h (10 additions, 4 deletions)

@@ -474,10 +474,16 @@ struct zone {
 	 * frequently read in proximity to zone->lock.  It's good to
 	 * give them a chance of being in the same cacheline.
 	 *
-	 * Write access to present_pages and managed_pages at runtime should
-	 * be protected by lock_memory_hotplug()/unlock_memory_hotplug().
-	 * Any reader who can't tolerant drift of present_pages and
-	 * managed_pages should hold memory hotplug lock to get a stable value.
+	 * Write access to present_pages at runtime should be protected by
+	 * lock_memory_hotplug()/unlock_memory_hotplug().  Any reader who can't
+	 * tolerant drift of present_pages should hold memory hotplug lock to
+	 * get a stable value.
+	 *
+	 * Read access to managed_pages should be safe because it's unsigned
+	 * long. Write access to zone->managed_pages and totalram_pages are
+	 * protected by managed_page_count_lock at runtime. Idealy only
+	 * adjust_managed_page_count() should be used instead of directly
+	 * touching zone->managed_pages and totalram_pages.
 	 */
 	unsigned long spanned_pages;
 	unsigned long present_pages;
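A note on the new comment's claim that unlocked reads are safe: an aligned unsigned long is loaded with a single word-sized access on every architecture Linux supports, so an unlocked reader can never observe a torn value, only a slightly stale one. A minimal sketch of a drift-tolerant reader (hypothetical function, not from this commit):

	#include <linux/mm.h>		/* totalram_pages */
	#include <linux/mmzone.h>	/* struct zone */
	#include <linux/printk.h>	/* pr_info() */

	/* Hypothetical: snapshot the counters without any lock (rule 1).
	 * Each load is one word-sized access, so the values cannot tear;
	 * they may simply be stale by the time they are printed. */
	static void example_report_counts(struct zone *zone)
	{
		pr_info("zone %s: %lu managed pages, %lu total ram pages\n",
			zone->name, zone->managed_pages, totalram_pages);
	}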

mm/page_alloc.c (11 additions, 0 deletions)

@@ -103,6 +103,9 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
+/* Protect totalram_pages and zone->managed_pages */
+static DEFINE_SPINLOCK(managed_page_count_lock);
+
 unsigned long totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
 /*
@@ -5206,6 +5209,14 @@ early_param("movablecore", cmdline_parse_movablecore);
 
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+void adjust_managed_page_count(struct page *page, long count)
+{
+	spin_lock(&managed_page_count_lock);
+	page_zone(page)->managed_pages += count;
+	totalram_pages += count;
+	spin_unlock(&managed_page_count_lock);
+}
+
 unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
 {
 	void *pos;
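Two reading notes on this hunk, not part of the commit itself. First, adjust_managed_page_count() takes a plain spin_lock() rather than spin_lock_irqsave(), which suggests updaters are expected to run in process context. Second, rule 2 is why the helper is unnecessary during early boot: init runs single-threaded, so boot-time code may still bump the counters directly, roughly like this (illustrative function name, assumed pattern):

	/* Boot-time accounting (rule 2): single-threaded, so no lock. */
	static void __init example_account_boot_pages(struct zone *zone,
						      unsigned long nr_pages)
	{
		zone->managed_pages += nr_pages;
		totalram_pages += nr_pages;
	}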
