
Commit a76a82a

Peter Zijlstra authored and Ingo Molnar committed
perf/core: Fix use-after-free bug
Dmitry reported a KASAN use-after-free on event->group_leader.

It turns out there's a hole in perf_remove_from_context() due to
event_function_call() not calling its function when the task
associated with the event is already dead. In this case the event
will have been detached from the task, but the grouping will have
been retained, such that group operations might still work properly
while there are live child events etc.

This does however mean that we can miss a perf_group_detach() call
when the group decomposes; this in turn can then lead to
use-after-free.

Fix it by explicitly doing the group detach if it's still required.

Reported-by: Dmitry Vyukov <[email protected]>
Tested-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected] # v4.5+
Cc: syzkaller <[email protected]>
Fixes: 63b6da3 ("perf: Fix perf_event_exit_task() race")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 566cf87 commit a76a82a

File tree

1 file changed: +25 −2 lines


kernel/events/core.c

Lines changed: 25 additions & 2 deletions
@@ -1469,7 +1469,6 @@ ctx_group_list(struct perf_event *event, struct perf_event_context *ctx)
 static void
 list_add_event(struct perf_event *event, struct perf_event_context *ctx)
 {
-
 	lockdep_assert_held(&ctx->lock);
 
 	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
@@ -1624,6 +1623,8 @@ static void perf_group_attach(struct perf_event *event)
 {
 	struct perf_event *group_leader = event->group_leader, *pos;
 
+	lockdep_assert_held(&event->ctx->lock);
+
 	/*
 	 * We can have double attach due to group movement in perf_event_open.
 	 */
@@ -1697,6 +1698,8 @@ static void perf_group_detach(struct perf_event *event)
 	struct perf_event *sibling, *tmp;
 	struct list_head *list = NULL;
 
+	lockdep_assert_held(&event->ctx->lock);
+
 	/*
 	 * We can have double detach due to exit/hot-unplug + close.
 	 */
@@ -1895,9 +1898,29 @@ __perf_remove_from_context(struct perf_event *event,
  */
 static void perf_remove_from_context(struct perf_event *event, unsigned long flags)
 {
-	lockdep_assert_held(&event->ctx->mutex);
+	struct perf_event_context *ctx = event->ctx;
+
+	lockdep_assert_held(&ctx->mutex);
 
 	event_function_call(event, __perf_remove_from_context, (void *)flags);
+
+	/*
+	 * The above event_function_call() can NO-OP when it hits
+	 * TASK_TOMBSTONE. In that case we must already have been detached
+	 * from the context (by perf_event_exit_event()) but the grouping
+	 * might still be in-tact.
+	 */
+	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
+	if ((flags & DETACH_GROUP) &&
+	    (event->attach_state & PERF_ATTACH_GROUP)) {
+		/*
+		 * Since in that case we cannot possibly be scheduled, simply
+		 * detach now.
+		 */
+		raw_spin_lock_irq(&ctx->lock);
+		perf_group_detach(event);
+		raw_spin_unlock_irq(&ctx->lock);
+	}
 }
 
 /*

Comments (0)