
Commit 03274a3

tracing/fgraph: Adjust fgraph depth before calling trace return callback
While debugging the virtual cputime with the function graph tracer with a max_depth of 1 (the most common use of max_depth so far), I found that I was missing kernel execution because of a race condition.

The code for the return side of the function has a slight race:

        ftrace_pop_return_trace(&trace, &ret, frame_pointer);
        trace.rettime = trace_clock_local();
        ftrace_graph_return(&trace);
        barrier();
        current->curr_ret_stack--;

The ftrace_pop_return_trace() initializes the trace structure for the callback. The ftrace_graph_return() uses the trace structure for its own use, as that structure is on the stack and is local to this function. Then curr_ret_stack is decremented, which is what trace.depth is set to.

If an interrupt comes in after the ftrace_graph_return() but before curr_ret_stack is decremented, then the called function will get a depth of 2. If max_depth is set to 1, this function will be ignored.

The problem is that the trace has already been called, and the timestamp for that trace will not reflect the time the function was about to re-enter userspace. Calls made by the interrupt will not be traced because max_depth has prevented this.

To solve this issue, the ftrace_graph_return() can safely be moved after current->curr_ret_stack has been updated. This way the timestamp for the return callback will reflect the actual time.

If an interrupt comes in after the curr_ret_stack update and ftrace_graph_return(), it will be traced. It may look a little confusing to see it within the other function, but at least it will not be lost.

Cc: Frederic Weisbecker <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
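For readers who want to see the ordering effect outside the kernel, the following is a minimal stand-alone C sketch of the race described above. It only models the depth bookkeeping: the names interrupt_entry and MAX_DEPTH, the plain-int curr_ret_stack, and the exact depth comparison are illustrative assumptions, not the kernel's real ftrace code.

        /*
         * Stand-alone model of the race (illustrative only -- simplified
         * assumptions, not the kernel's actual ftrace code).
         * Build with: gcc -o race race.c
         */
        #include <stdio.h>

        #define MAX_DEPTH 1             /* like echo 1 > max_graph_depth */

        static int curr_ret_stack;      /* nesting depth of traced calls */

        /* simplified entry-side check an interrupt's first function hits */
        static void interrupt_entry(const char *when)
        {
                int depth = curr_ret_stack + 1;

                if (depth > MAX_DEPTH)
                        printf("%s: depth %d, interrupt NOT traced\n", when, depth);
                else
                        printf("%s: depth %d, interrupt traced\n", when, depth);
        }

        int main(void)
        {
                /* Old ordering: return callback first, then the decrement. */
                curr_ret_stack = 1;     /* inside a depth-1 function */
                /* ftrace_graph_return(&trace) would run here ... */
                interrupt_entry("old order (before decrement)");
                curr_ret_stack--;

                /* New ordering: decrement first, then the return callback. */
                curr_ret_stack = 1;
                curr_ret_stack--;
                interrupt_entry("new order (after decrement)");
                /* ... ftrace_graph_return(&trace) runs here instead. */

                return 0;
        }

Compiled as-is, the sketch prints a "NOT traced" line for the old ordering and a "traced" line for the new one, which is the difference the patch below makes visible.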
1 parent 38dbe0b · commit 03274a3

1 file changed: +7 -1 lines changed

kernel/trace/trace_functions_graph.c

Lines changed: 7 additions & 1 deletion
@@ -191,10 +191,16 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
 
         ftrace_pop_return_trace(&trace, &ret, frame_pointer);
         trace.rettime = trace_clock_local();
-        ftrace_graph_return(&trace);
         barrier();
         current->curr_ret_stack--;
 
+        /*
+         * The trace should run after decrementing the ret counter
+         * in case an interrupt were to come in. We don't want to
+         * lose the interrupt if max_depth is set.
+         */
+        ftrace_graph_return(&trace);
+
         if (unlikely(!ret)) {
                 ftrace_graph_stop();
                 WARN_ON(1);
