Commit e6c495a

Vineet Gupta authored and torvalds committed
mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots
zap_pte_range loops from @addr to @end.  In the middle, if it runs out of batching slots, TLB entries need to be flushed for @start to @interim, NOT @interim to @end.

Since the ARC port doesn't use page free batching I can't test it myself, but this seems like the right thing to do.  Observed this when working on a fix for the issue at thread: http://www.spinics.net/lists/linux-arch/msg21736.html

Signed-off-by: Vineet Gupta <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Max Filippov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent f60e2a9 commit e6c495a

File tree

1 file changed: +6 −3 lines changed


mm/memory.c

Lines changed: 6 additions & 3 deletions
@@ -1101,6 +1101,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	unsigned long range_start = addr;
 
 again:
 	init_rss_vec(rss);
@@ -1206,12 +1207,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		force_flush = 0;
 
 #ifdef HAVE_GENERIC_MMU_GATHER
-		tlb->start = addr;
-		tlb->end = end;
+		tlb->start = range_start;
+		tlb->end = addr;
 #endif
 		tlb_flush_mmu(tlb);
-		if (addr != end)
+		if (addr != end) {
+			range_start = addr;
 			goto again;
+		}
 	}
 
 	return addr;

0 commit comments

Comments
 (0)