
Commit aafe104

tracing: Restructure trace_clock_global() to never block
It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection allows a single level of
recursion while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by function graph tracer)
          ring_buffer_lock_reserve() {
            trace_clock_global() {
              arch_spin_lock() {
                queued_spin_lock_slowpath() {
                /* DEAD LOCK! */

Tracing should *never* block, as it can lead to strange lockups like the
above.

Restructure the trace_clock_global() code so that, instead of simply
taking a lock to update the recorded "prev_time", it simply uses it: when
two events on two different CPUs call this at roughly the same time, it
really does not matter which one goes first. Use a trylock to grab the
lock for updating prev_time; if the trylock fails, something else is
already updating it, so simply try again on the next call.

Link: https://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Tested-by: Konstantin Kharlamov <[email protected]>
Tested-by: Todd Brandt <[email protected]>
Fixes: b02414c ("ring-buffer: Fix recursion protection transitions between interrupt context") # started showing the problem
Fixes: 14131f2 ("tracing: implement trace_clock_*() APIs") # where the bug happened
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
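
To make the shape of the fix easier to experiment with, here is a minimal
userspace sketch of the same "read without the lock, publish with a
trylock" pattern. It is only an illustration under stated assumptions: C11
atomics and a pthread spinlock stand in for READ_ONCE()/smp_rmb() and
arch_spin_trylock(), CLOCK_MONOTONIC stands in for sched_clock_cpu(), the
NMI special case is dropped, and the names global_clock(), local_clock_ns()
and the standalone prev_time variable are hypothetical, not the kernel's.

/* build with: cc -O2 sketch.c -lpthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static pthread_spinlock_t clock_lock;
static _Atomic uint64_t prev_time;	/* last published "global" time */

__attribute__((constructor))
static void clock_init(void)
{
	pthread_spin_init(&clock_lock, PTHREAD_PROCESS_PRIVATE);
}

/* Stand-in for sched_clock_cpu(): a monotonic nanosecond clock */
static uint64_t local_clock_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

uint64_t global_clock(void)
{
	/* Read the latest published time; the acquire load pairs with
	 * the releasing store done under the lock below. */
	uint64_t prev = atomic_load_explicit(&prev_time, memory_order_acquire);
	uint64_t now = local_clock_ns();

	/* Never return a value behind what was already published */
	if ((int64_t)(now - prev) < 0)
		now = prev + 1;

	/* Never block: if the lock is busy, skip publishing this value;
	 * a later call will update prev_time instead. */
	if (pthread_spin_trylock(&clock_lock) == 0) {
		/* Reread in case another thread published meanwhile */
		prev = atomic_load_explicit(&prev_time, memory_order_relaxed);
		if ((int64_t)(now - prev) < 0)
			now = prev + 1;
		atomic_store_explicit(&prev_time, now, memory_order_release);
		pthread_spin_unlock(&clock_lock);
	}

	return now;
}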
1 parent 785e3c0 commit aafe104

File tree

1 file changed (+30, -14 lines)

kernel/trace/trace_clock.c

Lines changed: 30 additions & 14 deletions
@@ -95,33 +95,49 @@ u64 notrace trace_clock_global(void)
 {
 	unsigned long flags;
 	int this_cpu;
-	u64 now;
+	u64 now, prev_time;
 
 	raw_local_irq_save(flags);
 
 	this_cpu = raw_smp_processor_id();
-	now = sched_clock_cpu(this_cpu);
+
 	/*
-	 * If in an NMI context then dont risk lockups and return the
-	 * cpu_clock() time:
+	 * The global clock "guarantees" that the events are ordered
+	 * between CPUs. But if two events on two different CPUS call
+	 * trace_clock_global at roughly the same time, it really does
+	 * not matter which one gets the earlier time. Just make sure
+	 * that the same CPU will always show a monotonic clock.
+	 *
+	 * Use a read memory barrier to get the latest written
+	 * time that was recorded.
 	 */
-	if (unlikely(in_nmi()))
-		goto out;
+	smp_rmb();
+	prev_time = READ_ONCE(trace_clock_struct.prev_time);
+	now = sched_clock_cpu(this_cpu);
 
-	arch_spin_lock(&trace_clock_struct.lock);
+	/* Make sure that now is always greater than prev_time */
+	if ((s64)(now - prev_time) < 0)
+		now = prev_time + 1;
 
 	/*
-	 * TODO: if this happens often then maybe we should reset
-	 * my_scd->clock to prev_time+1, to make sure
-	 * we start ticking with the local clock from now on?
+	 * If in an NMI context then dont risk lockups and simply return
+	 * the current time.
 	 */
-	if ((s64)(now - trace_clock_struct.prev_time) < 0)
-		now = trace_clock_struct.prev_time + 1;
+	if (unlikely(in_nmi()))
+		goto out;
 
-	trace_clock_struct.prev_time = now;
+	/* Tracing can cause strange recursion, always use a try lock */
+	if (arch_spin_trylock(&trace_clock_struct.lock)) {
+		/* Reread prev_time in case it was already updated */
+		prev_time = READ_ONCE(trace_clock_struct.prev_time);
+		if ((s64)(now - prev_time) < 0)
+			now = prev_time + 1;
 
-	arch_spin_unlock(&trace_clock_struct.lock);
+		trace_clock_struct.prev_time = now;
 
+		/* The unlock acts as the wmb for the above rmb */
+		arch_spin_unlock(&trace_clock_struct.lock);
+	}
  out:
 	raw_local_irq_restore(flags);
 
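
For completeness, a hypothetical test driver for the global_clock() sketch
above (placed in the same translation unit) can exercise the property the
patch's new comment calls out, namely that a single caller never sees its
clock go backwards even when the trylock fails and the update is skipped:

#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

uint64_t global_clock(void);		/* from the sketch above */

#define NTHREADS	4
#define NCALLS		1000000

static void *hammer(void *arg)
{
	uint64_t prev = 0;

	(void)arg;
	for (int i = 0; i < NCALLS; i++) {
		uint64_t t = global_clock();

		assert(t >= prev);	/* per-thread monotonicity */
		prev = t;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, hammer, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("per-thread monotonicity held for %d threads\n", NTHREADS);
	return 0;
}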
