
Commit 6d6ea1e

drinkcat authored and torvalds committed
mm: add support for kmem caches in DMA32 zone
Patch series "iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables", v6.

This is a followup to the discussion in [1], [2].

IOMMUs using the ARMv7 short-descriptor format require page tables (level 1 and 2) to be allocated within the first 4GB of RAM, even on 64-bit systems.

For L1 tables that are bigger than a page, we can just use __get_free_pages with GFP_DMA32 (on arm64 systems only, arm would still use GFP_DMA).

For L2 tables that only take 1KB, it would be a waste to allocate a full page, so we considered 3 approaches:
 1. This series, adding support for GFP_DMA32 slab caches.
 2. genalloc, which requires pre-allocating the maximum number of L2 page tables (4096, so 4MB of memory).
 3. page_frag, which is not very memory-efficient as it is unable to reuse freed fragments until the whole page is freed. [3]

This series is the most memory-efficient approach.

stable@ note:
We confirmed that this is a regression, and IOMMU errors happen on 4.19 and linux-next/master on MT8173 (elm, Acer Chromebook R13). The issue most likely starts from commit ad67f5a ("arm64: replace ZONE_DMA with ZONE_DMA32"), i.e. 4.15, and presumably breaks a number of Mediatek platforms (and maybe others?).

[1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html
[2] https://lists.linuxfoundation.org/pipermail/iommu/2018-December/031696.html
[3] https://patchwork.codeaurora.org/patch/671639/

This patch (of 3):

IOMMUs using the ARMv7 short-descriptor format require page tables to be allocated within the first 4GB of RAM, even on 64-bit systems. On arm64, this is done by passing the GFP_DMA32 flag to memory allocation functions.

For IOMMU L2 tables that only take 1KB, it would be a waste to allocate a full page using get_free_pages, so we considered 3 approaches:
 1. This patch, adding support for GFP_DMA32 slab caches.
 2. genalloc, which requires pre-allocating the maximum number of L2 page tables (4096, so 4MB of memory).
 3. page_frag, which is not very memory-efficient as it is unable to reuse freed fragments until the whole page is freed.

This change makes it possible to create a custom cache in the DMA32 zone using kmem_cache_create, then allocate memory using kmem_cache_alloc.

We do not create a DMA32 kmalloc cache array, as there are currently no users of kmalloc(..., GFP_DMA32). These calls will continue to trigger a warning, as we keep GFP_DMA32 in GFP_SLAB_BUG_MASK.

This implies that calls to kmem_cache_*alloc on a SLAB_CACHE_DMA32 kmem_cache must _not_ use GFP_DMA32 (it is anyway redundant and unnecessary).

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Nicolas Boichat <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Huaisheng Ye <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Yong Wu <[email protected]>
Cc: Matthias Brugger <[email protected]>
Cc: Tomasz Figa <[email protected]>
Cc: Yingjoe Chen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Hsin-Yi Wang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
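With this change in place, a driver can request DMA32-backed objects from the slab allocator by setting SLAB_CACHE_DMA32 at cache creation time. A minimal kernel-style sketch of that usage (the cache name, init function, and 1KB object size are illustrative assumptions modeled on the io-pgtable-arm-v7s use case, not code from this patch):

```c
#include <linux/slab.h>

/* Hypothetical 1KB ARMv7 short-descriptor L2 page table. */
#define L2_TABLE_SIZE	1024

static struct kmem_cache *l2_table_cache;

static int l2_table_cache_init(void)
{
	/*
	 * SLAB_CACHE_DMA32 makes every page backing this cache come
	 * from ZONE_DMA32, i.e. the first 4GB of RAM.
	 */
	l2_table_cache = kmem_cache_create("l2_tables", L2_TABLE_SIZE,
					   L2_TABLE_SIZE,
					   SLAB_CACHE_DMA32, NULL);
	return l2_table_cache ? 0 : -ENOMEM;
}

static void *l2_table_alloc(void)
{
	/*
	 * Note: no GFP_DMA32 here -- the zone is a property of the
	 * cache, and passing GFP_DMA32 to kmem_cache_alloc would trip
	 * the GFP_SLAB_BUG_MASK warning mentioned above.
	 */
	return kmem_cache_alloc(l2_table_cache, GFP_KERNEL);
}
```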
1 parent e6a9467 commit 6d6ea1e

File tree

5 files changed: +12 −2 lines changed

include/linux/slab.h

Lines changed: 2 additions & 0 deletions
Lines changed: 2 additions & 0 deletions

@@ -32,6 +32,8 @@
 #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
 /* Use GFP_DMA memory */
 #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
+/* Use GFP_DMA32 memory */
+#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
 /* DEBUG: Store the last owner for bug hunting */
 #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
 /* Panic if kmem_cache_create() fails */

mm/slab.c

Lines changed: 2 additions & 0 deletions
@@ -2115,6 +2115,8 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	cachep->allocflags = __GFP_COMP;
 	if (flags & SLAB_CACHE_DMA)
 		cachep->allocflags |= GFP_DMA;
+	if (flags & SLAB_CACHE_DMA32)
+		cachep->allocflags |= GFP_DMA32;
 	if (flags & SLAB_RECLAIM_ACCOUNT)
 		cachep->allocflags |= __GFP_RECLAIMABLE;
 	cachep->size = size;

mm/slab.h

Lines changed: 2 additions & 1 deletion
@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
+#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
 
 #if defined(CONFIG_DEBUG_SLAB)

mm/slab_common.c

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		SLAB_FAILSLAB | SLAB_KASAN)
 
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-			 SLAB_ACCOUNT)
+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
 
 /*
  * Merge control. If this is set then no merging of slab caches will occur.

mm/slub.c

Lines changed: 5 additions & 0 deletions
@@ -3589,6 +3589,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (s->flags & SLAB_CACHE_DMA)
 		s->allocflags |= GFP_DMA;
 
+	if (s->flags & SLAB_CACHE_DMA32)
+		s->allocflags |= GFP_DMA32;
+
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;
 
@@ -5679,6 +5682,8 @@ static char *create_unique_id(struct kmem_cache *s)
 	 */
 	if (s->flags & SLAB_CACHE_DMA)
 		*p++ = 'd';
+	if (s->flags & SLAB_CACHE_DMA32)
+		*p++ = 'D';
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		*p++ = 'a';
 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
