
Commit 1dd1eff

Zqiang authored and KAGA-KOKO committed
softirq: Fix suspicious RCU usage in __do_softirq()
Currently, the condition "__this_cpu_read(ksoftirqd) == current" is used to
invoke rcu_softirq_qs() in ksoftirqd tasks context for non-RT kernels.

This works correctly as long as the context is actually task context but
this condition is wrong when:

    - the current task is ksoftirqd
    - the task is interrupted in a RCU read side critical section
    - __do_softirq() is invoked on return from interrupt

Syzkaller triggered the following scenario:

  -> finish_task_switch()
    -> put_task_struct_rcu_user()
      -> call_rcu(&task->rcu, delayed_put_task_struct)
        -> __kasan_record_aux_stack()
          -> pfn_valid()
            -> rcu_read_lock_sched()
              <interrupt>
                __irq_exit_rcu()
                -> __do_softirq()
                  -> if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
                         __this_cpu_read(ksoftirqd) == current)
                    -> rcu_softirq_qs()
                      -> RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map))

The RCU quiescent state is reported inside the RCU read-side critical
section, which triggers the lockdep warning.

Fix this by splitting out the inner working of __do_softirq() into a helper
function which takes an argument to distinguish between ksoftirqd task
context and interrupted context. Invoke the helper from the relevant call
sites with the proper context information and use that for the conditional
invocation of rcu_softirq_qs().

Reported-by: [email protected]
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Zqiang <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/lkml/8f281a10-b85a-4586-9586-5bbc12dc784f@paulmck-laptop/T/#mea8aba4abfcb97bbf499d169ce7f30c4cff1b0e3
1 parent c26591a commit 1dd1eff

File tree: 1 file changed (+8, -4)


kernel/softirq.c

Lines changed: 8 additions & 4 deletions

@@ -508,7 +508,7 @@ static inline bool lockdep_softirq_start(void) { return false; }
 static inline void lockdep_softirq_end(bool in_hardirq) { }
 #endif

-asmlinkage __visible void __softirq_entry __do_softirq(void)
+static void handle_softirqs(bool ksirqd)
 {
 	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
 	unsigned long old_flags = current->flags;
@@ -563,8 +563,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 		pending >>= softirq_bit;
 	}

-	if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
-	    __this_cpu_read(ksoftirqd) == current)
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && ksirqd)
 		rcu_softirq_qs();

 	local_irq_disable();
@@ -584,6 +583,11 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }

+asmlinkage __visible void __softirq_entry __do_softirq(void)
+{
+	handle_softirqs(false);
+}
+
 /**
  * irq_enter_rcu - Enter an interrupt context with RCU watching
  */
@@ -921,7 +925,7 @@ static void run_ksoftirqd(unsigned int cpu)
 	 * We can safely run softirq on inline stack, as we are not deep
 	 * in the task stack here.
 	 */
-	__do_softirq();
+	handle_softirqs(true);
 	ksoftirqd_run_end();
 	cond_resched();
 	return;
