
Commit ce27374

amluto authored, Ingo Molnar committed
x86/mm: Make flush_tlb_mm_range() more predictable
I'm about to rewrite the function almost completely, but first I want to get a functional change out of the way. Currently, if flush_tlb_mm_range() does not flush the local TLB at all, it will never do individual page flushes on remote CPUs. This seems to be an accident, and preserving it will be awkward. Let's change it first so that any regressions in the rewrite will be easier to bisect and so that the rewrite can attempt to change no visible behavior at all.

The fix is simple: we can simply avoid short-circuiting the calculation of base_pages_to_flush.

As a side effect, this also eliminates a potential corner case: if tlb_single_page_flush_ceiling == TLB_FLUSH_ALL, flush_tlb_mm_range() could have ended up flushing the entire address space one page at a time.

Signed-off-by: Andy Lutomirski <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/4b29b771d9975aad7154c314534fec235618175a.1492844372.git.luto@kernel.org
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 29961b5 commit ce27374

File tree

1 file changed: +7 −5 lines


arch/x86/mm/tlb.c

Lines changed: 7 additions & 5 deletions
@@ -309,6 +309,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
 
 	preempt_disable();
+
+	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
+	if (base_pages_to_flush > tlb_single_page_flush_ceiling)
+		base_pages_to_flush = TLB_FLUSH_ALL;
+
 	if (current->active_mm != mm) {
 		/* Synchronize with switch_mm. */
 		smp_mb();
@@ -325,15 +331,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		goto out;
 	}
 
-	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
-		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
-
 	/*
 	 * Both branches below are implicit full barriers (MOV to CR or
 	 * INVLPG) that synchronize with switch_mm.
 	 */
-	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
-		base_pages_to_flush = TLB_FLUSH_ALL;
+	if (base_pages_to_flush == TLB_FLUSH_ALL) {
 		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 		local_flush_tlb();
 	} else {
