
Commit eb33575

gormanm authored and Russell King committed

[ARM] Double check memmap is actually valid when a memmap has unexpected holes V2
pfn_valid() is meant to be able to tell if a given PFN has valid memmap associated with it or not. In FLATMEM, it is expected that holes always have valid memmap as long as there are valid PFNs on either side of the hole. In SPARSEMEM, it is assumed that a valid section has a memmap for the entire section.

However, ARM, and maybe other embedded architectures in the future, free the memmap backing holes to save memory, on the assumption that the memmap is never used. The page_zone linkages are then broken even though pfn_valid() returns true. A walker of the full memmap must then do an additional check to ensure the memmap it is looking at is sane, by making sure the zone and PFN linkages are still valid. This is expensive, but walkers of the full memmap are extremely rare.

This was caught before for FLATMEM and hacked around, but it hits again for SPARSEMEM because the page_zone linkages can look OK while the PFN linkages are totally screwed. This looks like a hatchet job, but the reality is that any clean solution would end up consuming all the memory saved by punching these unexpected holes in the memmap. For example, we tried marking the memmap within the section invalid, but the section size exceeds the size of the hole in most cases, so pfn_valid() starts returning false where valid memmap exists. Shrinking the section size would increase memory consumption, offsetting the gains.

This patch identifies when an architecture is punching unexpected holes in the memmap that the memory model cannot automatically detect, and sets ARCH_HAS_HOLES_MEMORYMODEL. At the moment this is restricted to EP93xx, the sub-architecture this has been reported on, but it may expand later. When set, walkers of the full memmap must call memmap_valid_within() for each PFN, passing in what they expect the page and zone to be for that PFN. If the linkages are found to be broken, the memmap is assumed to be invalid for that PFN.
Signed-off-by: Mel Gorman <[email protected]> Signed-off-by: Russell King <[email protected]>
1 parent e1342f1 commit eb33575

4 files changed: +48, -18 lines

arch/arm/Kconfig

Lines changed: 3 additions & 3 deletions
@@ -273,6 +273,7 @@ config ARCH_EP93XX
 	select HAVE_CLK
 	select COMMON_CLKDEV
 	select ARCH_REQUIRE_GPIOLIB
+	select ARCH_HAS_HOLES_MEMORYMODEL
 	help
 	  This enables support for the Cirrus EP93xx series of CPUs.

@@ -976,10 +977,9 @@ config OABI_COMPAT
 	  UNPREDICTABLE (in fact it can be predicted that it won't work
 	  at all). If in doubt say Y.

-config ARCH_FLATMEM_HAS_HOLES
+config ARCH_HAS_HOLES_MEMORYMODEL
 	bool
-	default y
-	depends on FLATMEM
+	default n

 # Discontigmem is deprecated
 config ARCH_DISCONTIGMEM_ENABLE
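For illustration only, a hypothetical future sub-architecture that frees memmap holes would opt in the same way EP93xx does above; `ARCH_BAR` below is an invented name, not part of this patch:

```
# Hypothetical platform entry; ARCH_BAR does not exist in the tree.
config ARCH_BAR
	bool "Hypothetical board that frees memmap backing holes"
	select ARCH_HAS_HOLES_MEMORYMODEL
	help
	  Platforms that free the memmap backing unexpected holes must
	  select ARCH_HAS_HOLES_MEMORYMODEL so that full-memmap walkers
	  call memmap_valid_within() before trusting the page linkages.
```

Because the symbol is `default n` with no prompt, it is only ever enabled by a `select` from a platform that actually punches these holes.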

include/linux/mmzone.h

Lines changed: 26 additions & 0 deletions
@@ -1097,6 +1097,32 @@ unsigned long __init node_memmap_size_bytes(int, unsigned long, unsigned long);
 #define pfn_valid_within(pfn) (1)
 #endif

+#ifdef CONFIG_ARCH_HAS_HOLES_MEMORYMODEL
+/*
+ * pfn_valid() is meant to be able to tell if a given PFN has valid memmap
+ * associated with it or not. In FLATMEM, it is expected that holes always
+ * have valid memmap as long as there are valid PFNs either side of the
+ * hole. In SPARSEMEM, it is assumed that a valid section has a memmap for
+ * the entire section.
+ *
+ * However, ARM, and maybe other embedded architectures in the future,
+ * free memmap backing holes to save memory on the assumption the memmap is
+ * never used. The page_zone linkages are then broken even though pfn_valid()
+ * returns true. A walker of the full memmap must then do this additional
+ * check to ensure the memmap they are looking at is sane by making sure
+ * the zone and PFN linkages are still valid. This is expensive, but walkers
+ * of the full memmap are extremely rare.
+ */
+int memmap_valid_within(unsigned long pfn,
+			struct page *page, struct zone *zone);
+#else
+static inline int memmap_valid_within(unsigned long pfn,
+					struct page *page, struct zone *zone)
+{
+	return 1;
+}
+#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
+
 #endif /* !__GENERATING_BOUNDS.H */
 #endif /* !__ASSEMBLY__ */
 #endif /* _LINUX_MMZONE_H */

mm/mmzone.c

Lines changed: 15 additions & 0 deletions
@@ -6,6 +6,7 @@


 #include <linux/stddef.h>
+#include <linux/mm.h>
 #include <linux/mmzone.h>
 #include <linux/module.h>

@@ -72,3 +73,17 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
 	*zone = zonelist_zone(z);
 	return z;
 }
+
+#ifdef CONFIG_ARCH_HAS_HOLES_MEMORYMODEL
+int memmap_valid_within(unsigned long pfn,
+			struct page *page, struct zone *zone)
+{
+	if (page_to_pfn(page) != pfn)
+		return 0;
+
+	if (page_zone(page) != zone)
+		return 0;
+
+	return 1;
+}
+#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */

mm/vmstat.c

Lines changed: 4 additions & 15 deletions
@@ -509,22 +509,11 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
 			continue;

 		page = pfn_to_page(pfn);
-#ifdef CONFIG_ARCH_FLATMEM_HAS_HOLES
-		/*
-		 * Ordinarily, memory holes in flatmem still have a valid
-		 * memmap for the PFN range. However, an architecture for
-		 * embedded systems (e.g. ARM) can free up the memmap backing
-		 * holes to save memory on the assumption the memmap is
-		 * never used. The page_zone linkages are then broken even
-		 * though pfn_valid() returns true. Skip the page if the
-		 * linkages are broken. Even if this test passed, the impact
-		 * is that the counters for the movable type are off but
-		 * fragmentation monitoring is likely meaningless on small
-		 * systems.
-		 */
-		if (page_zone(page) != zone)
+
+		/* Watch for unexpected holes punched in the memmap */
+		if (!memmap_valid_within(pfn, page, zone))
 			continue;
-#endif
+
 		mtype = get_pageblock_migratetype(page);

 		if (mtype < MIGRATE_TYPES)
