
Commit 0b6aa1a

kvaneesh authored and mpe committed
powerpc/mm/tlbflush: update the mmu_gather page size while iterating address range
This patch makes sure we update the mmu_gather page size even if we are requesting a fullmm flush. This avoids triggering the VM_WARN_ON in code paths such as __tlb_remove_page_size() that explicitly check that the page size of the range being removed matches the mmu_gather page size.

Fixes: 5a60993 ("powerpc/64s/radix: tlb do not flush on page size when fullmm")
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Acked-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
1 parent fce278a commit 0b6aa1a
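
For context, a minimal sketch of the kind of check in the generic __tlb_remove_page_size() that the commit message refers to. This is not a verbatim quote of the kernel sources; the body shown is an assumption used only to illustrate why the gather's recorded page size must stay in sync.

	/*
	 * Hedged sketch only: illustrates the page-size consistency check that
	 * the commit message says must not fire. The real implementation lives
	 * in the generic mmu_gather code; everything else here is assumed.
	 */
	bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
				    int page_size)
	{
		/* Fires if callers never recorded this page size in the gather. */
		VM_WARN_ON(tlb->page_size != page_size);

		/* ... queue the page in the gather batch for a later flush ... */
		return false;	/* sketch: real code reports whether the batch is full */
	}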

File tree: 1 file changed, arch/powerpc/include/asm/tlb.h (+2, -4 lines)

arch/powerpc/include/asm/tlb.h

Lines changed: 2 additions & 4 deletions
@@ -49,13 +49,11 @@ static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
 static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
-	if (tlb->fullmm)
-		return;
-
 	if (!tlb->page_size)
 		tlb->page_size = page_size;
 	else if (tlb->page_size != page_size) {
-		tlb_flush_mmu(tlb);
+		if (!tlb->fullmm)
+			tlb_flush_mmu(tlb);
 		/*
 		 * update the page size after flush for the new
 		 * mmu_gather.
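
For readability, here is how tlb_remove_check_page_size_change() reads once the hunk above is applied. The function tail beyond the displayed hunk (closing the comment and recording the new page size) is reconstructed as an assumption from the surrounding file, not shown in this diff.

	/*
	 * Resulting helper after this patch (arch/powerpc/include/asm/tlb.h).
	 * Lines past the end of the displayed hunk are an assumed reconstruction.
	 */
	static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
							     unsigned int page_size)
	{
		if (!tlb->page_size)
			tlb->page_size = page_size;
		else if (tlb->page_size != page_size) {
			/*
			 * Only a partial flush needs to drain the batch here; a
			 * fullmm teardown flushes everything at the end anyway,
			 * but the recorded page size must still be updated.
			 */
			if (!tlb->fullmm)
				tlb_flush_mmu(tlb);
			/*
			 * update the page size after flush for the new
			 * mmu_gather.
			 */
			tlb->page_size = page_size;	/* assumed, past end of hunk */
		}
	}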
