
Commit e708d7a

Sebastian Andrzej Siewior authored and Ingo Molnar committed
perf: Do poll_wait() before checking condition in perf_poll()
One should first enqueue on the waitqueue and then check the condition. If the condition becomes true after mutex_unlock() but before poll_wait(), we lose the wakeup and have to wait for another one.

This has been like this since v2.6.31-rc1 commit c7138f3 ("perf_counter: fix perf_poll()"). Before that it was slightly worse. We probably get enough wakeups that missing one here does not really matter, but it is still a bad example.

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Linus Torvalds <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 36bbb2f commit e708d7a

File tree

1 file changed (+1, -3 lines)

kernel/events/core.c

Lines changed: 1 addition & 3 deletions
@@ -3629,6 +3629,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	struct ring_buffer *rb;
 	unsigned int events = POLLHUP;
 
+	poll_wait(file, &event->waitq, wait);
 	/*
 	 * Pin the event->rb by taking event->mmap_mutex; otherwise
 	 * perf_event_set_output() can swizzle our rb and make us miss wakeups.
@@ -3638,9 +3639,6 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	if (rb)
 		events = atomic_xchg(&rb->poll, 0);
 	mutex_unlock(&event->mmap_mutex);
-
-	poll_wait(file, &event->waitq, wait);
-
 	return events;
 }