
Commit 7f18825

kvaneesh authored and torvalds committed
powerpc/mm/iommu: allow large IOMMU page size only for hugetlb backing
THP pages can get split during different code paths. An incremented reference
count does imply we will not split the compound page. But the pmd entry can be
converted to level 4 pte entries. Keep the code simpler by allowing large
IOMMU page size only if the guest ram is backed by hugetlb pages.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Cc: Alexey Kardashevskiy <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: David Gibson <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 678e174 commit 7f18825

File tree

1 file changed: +7 −17 lines changed


arch/powerpc/mm/mmu_context_iommu.c

Lines changed: 7 additions & 17 deletions
@@ -98,8 +98,6 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	struct mm_iommu_table_group_mem_t *mem;
 	long i, ret, locked_entries = 0;
 	unsigned int pageshift;
-	unsigned long flags;
-	unsigned long cur_ua;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -167,22 +165,14 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	for (i = 0; i < entries; ++i) {
 		struct page *page = mem->hpages[i];
 
-		cur_ua = ua + (i << PAGE_SHIFT);
-		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
-			pte_t *pte;
+		/*
+		 * Allow to use larger than 64k IOMMU pages. Only do that
+		 * if we are backed by hugetlb.
+		 */
+		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page)) {
 			struct page *head = compound_head(page);
-			unsigned int compshift = compound_order(head);
-			unsigned int pteshift;
-
-			local_irq_save(flags); /* disables as well */
-			pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);
-
-			/* Double check it is still the same pinned page */
-			if (pte && pte_page(*pte) == head &&
-			    pteshift == compshift + PAGE_SHIFT)
-				pageshift = max_t(unsigned int, pteshift,
-						PAGE_SHIFT);
-			local_irq_restore(flags);
+
+			pageshift = compound_order(head) + PAGE_SHIFT;
 		}
 		mem->pageshift = min(mem->pageshift, pageshift);
 		/*
