
Commit 9765ad1

dgibson authored and mpe committed
powerpc/mm: Ensure IRQs are off in switch_mm()
powerpc expects IRQs to already be (soft) disabled when switch_mm() is called, as made clear in the commit message of 9c1e105 ("powerpc: Allow perf_counters to access user memory at interrupt time").

Aside from any race conditions that might exist between switch_mm() and an IRQ, there is also an unconditional hard_irq_disable() in switch_slb(). If that isn't followed at some point by an IRQ enable then interrupts will remain disabled until we return to userspace.

It is true that when switch_mm() is called from the scheduler IRQs are off, but not when it's called by use_mm(). Looking closer we see that last year in commit f98db60 ("sched/core: Add switch_mm_irqs_off() and use it in the scheduler") this was made more explicit by the addition of switch_mm_irqs_off(), which is now called by the scheduler, vs switch_mm(), which is used by use_mm().

Arguably it is a bug in use_mm() to call switch_mm() in a different context than it expects, but fixing that will take time.

This was discovered recently when vhost started throwing warnings such as:

  BUG: sleeping function called from invalid context at kernel/mutex.c:578
  in_atomic(): 0, irqs_disabled(): 1, pid: 10768, name: vhost-10760
  no locks held by vhost-10760/10768.
  irq event stamp: 10
  hardirqs last enabled at (9): _raw_spin_unlock_irq+0x40/0x80
  hardirqs last disabled at (10): switch_slb+0x2e4/0x490
  softirqs last enabled at (0): copy_process+0x5e8/0x1260
  softirqs last disabled at (0): (null)
  Call Trace:
    show_stack+0x88/0x390 (unreliable)
    dump_stack+0x30/0x44
    __might_sleep+0x1c4/0x2d0
    mutex_lock_nested+0x74/0x5c0
    cgroup_attach_task_all+0x5c/0x180
    vhost_attach_cgroups_work+0x58/0x80 [vhost]
    vhost_worker+0x24c/0x3d0 [vhost]
    kthread+0xec/0x100
    ret_from_kernel_thread+0x5c/0xd4

Prior to commit 04b96e5 ("vhost: lockless enqueuing") (Aug 2016) the vhost_worker() would do a spin_unlock_irq() not long after calling use_mm(), which had the effect of re-enabling IRQs. Since that commit removed the locking in vhost_worker(), the body of the vhost_worker() loop now runs with interrupts off, causing the warnings.

This patch addresses the problem by making the powerpc code mirror the x86 code, i.e. we disable interrupts in switch_mm(), and optimise the scheduler case by defining switch_mm_irqs_off().

Cc: [email protected] # v4.7+
Signed-off-by: David Gibson <[email protected]>
[mpe: Flesh out/rewrite change log, add stable]
Signed-off-by: Michael Ellerman <[email protected]>
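For context, the use_mm() path that trips over this looks roughly like the sketch below. This is a paraphrase of mm/mmu_context.c from around this kernel version, not a verbatim quote, and mm refcounting is elided; the point is that a kernel thread such as vhost_worker() adopts a userspace mm with interrupts fully enabled, so switch_mm() runs in a context the scheduler path never produces.

  /* Rough sketch, not the exact kernel source: kthread adopts a user mm. */
  void use_mm(struct mm_struct *mm)
  {
  	struct task_struct *tsk = current;
  	struct mm_struct *active_mm;

  	task_lock(tsk);
  	active_mm = tsk->active_mm;
  	tsk->active_mm = mm;		/* mm refcounting elided in this sketch */
  	tsk->mm = mm;
  	/* Reached with IRQs enabled, unlike the scheduler's context_switch():
  	 * on powerpc, switch_slb() then hard-disables interrupts and nothing
  	 * on this path re-enables them. */
  	switch_mm(active_mm, mm, tsk);
  	task_unlock(tsk);
  }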
1 parent e76ca27 commit 9765ad1

File tree

1 file changed: +15 -2 lines

arch/powerpc/include/asm/mmu_context.h

Lines changed: 15 additions & 2 deletions
@@ -71,8 +71,9 @@ extern void drop_cop(unsigned long acop, struct mm_struct *mm);
  * switch_mm is the entry point called from the architecture independent
  * code in kernel/sched/core.c
  */
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
-			     struct task_struct *tsk)
+static inline void switch_mm_irqs_off(struct mm_struct *prev,
+				      struct mm_struct *next,
+				      struct task_struct *tsk)
 {
 	/* Mark this context has been used on the new CPU */
 	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next)))
@@ -111,6 +112,18 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	switch_mmu_context(prev, next, tsk);
 }
 
+static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+			     struct task_struct *tsk)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	switch_mm_irqs_off(prev, next, tsk);
+	local_irq_restore(flags);
+}
+#define switch_mm_irqs_off switch_mm_irqs_off
+
+
 #define deactivate_mm(tsk,mm)	do { } while (0)
 
 /*
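The trailing #define switch_mm_irqs_off switch_mm_irqs_off matters because the generic code, added in f98db60 mentioned above, only falls back to plain switch_mm() when an architecture has not provided its own IRQs-off variant. Roughly (a paraphrase of the generic header, not a verbatim quote):

  /* Generic fallback, approximately as introduced by f98db60: an arch that
   * defines switch_mm_irqs_off() gets it called directly from the scheduler's
   * context_switch() (where IRQs are already off); otherwise the scheduler
   * just uses switch_mm(). */
  #ifndef switch_mm_irqs_off
  # define switch_mm_irqs_off switch_mm
  #endif

With the powerpc definition above in place, the scheduler keeps its fast path with no extra IRQ save/restore, while callers such as use_mm() go through the new switch_mm() wrapper that disables interrupts around switch_mm_irqs_off().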
