
Commit 8e998fc

Authored by joergroedel and committed by KAGA-KOKO
x86/mm: Sync also unmappings in vmalloc_sync_all()
With huge-page ioremap areas the unmappings also need to be synced between
all page-tables. Otherwise it can cause data corruption when a region is
unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync unmappings and make sure
vmalloc_sync_all() iterates over all page-tables even when an unmapped PMD
is found.

Fixes: 5d72b4f ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 51b75b5

1 file changed (+5, -8 lines)

arch/x86/mm/fault.c

Lines changed: 5 additions & 8 deletions
@@ -177,11 +177,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) != pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
@@ -203,17 +204,13 @@ void vmalloc_sync_all(void)
 	spin_lock(&pgd_lock);
 	list_for_each_entry(page, &pgd_list, lru) {
 		spinlock_t *pgt_lock;
-		pmd_t *ret;
 
 		/* the pgt_lock only for Xen */
 		pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 
 		spin_lock(pgt_lock);
-		ret = vmalloc_sync_one(page_address(page), address);
+		vmalloc_sync_one(page_address(page), address);
 		spin_unlock(pgt_lock);
-
-		if (!ret)
-			break;
 	}
 	spin_unlock(&pgd_lock);
 }
