
Commit 3f005e7

mrutland-arm authored and Ingo Molnar committed
perf/core: Sched out groups atomically
Groups of events are supposed to be scheduled atomically, such that it is possible to derive meaningful ratios between their values.

We take great pains to achieve this when scheduling event groups to a PMU in group_sched_in(), calling {start,commit}_txn() (which fall back to perf_pmu_{disable,enable}() if necessary) to provide this guarantee. However, we don't mirror this in group_sched_out(), and in some cases events will not be scheduled out atomically.

For example, if we disable an event group with PERF_EVENT_IOC_DISABLE, we'll cross-call __perf_event_disable() for the group leader, and will call group_sched_out() without having first disabled the relevant PMU. We will disable/enable the PMU around each pmu->del() call, but between each call the PMU will be enabled and events may count.

Avoid this by explicitly disabling and enabling the PMU around event removal in group_sched_out(), mirroring what we do in group_sched_in().

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
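The fix relies on perf_pmu_disable()/perf_pmu_enable() being nestable: each event's removal path disables and re-enables the PMU around its own ->del() call, so an outer disable in group_sched_out() keeps the PMU quiesced across the whole teardown. The following is a hedged user-space sketch of that nesting-counter pattern; the struct, the `counting` flag, and `group_sched_out_model()` are illustrative names, not kernel code.

```c
/* Simplified model of perf_pmu_disable()/perf_pmu_enable() nesting.
 * Assumption: names mirror the kernel's but this is not kernel code. */
#include <assert.h>

struct pmu_model {
	int disable_count;	/* nesting depth, like pmu_disable_count */
	int counting;		/* 1 while the hardware would be counting */
};

static void pmu_disable(struct pmu_model *pmu)
{
	if (pmu->disable_count++ == 0)
		pmu->counting = 0;	/* actually stop the PMU once */
}

static void pmu_enable(struct pmu_model *pmu)
{
	assert(pmu->disable_count > 0);
	if (--pmu->disable_count == 0)
		pmu->counting = 1;	/* restart only when depth hits zero */
}

/* Each event's removal disables/enables the PMU around its ->del();
 * with an outer disable held, the PMU never restarts in between. */
static void event_sched_out_model(struct pmu_model *pmu)
{
	pmu_disable(pmu);
	/* ... pmu->del(event) would run here ... */
	pmu_enable(pmu);
}

/* Returns how many times the PMU was counting mid-teardown (want: 0). */
int group_sched_out_model(struct pmu_model *pmu, int nr_siblings)
{
	int restarted = 0;

	pmu_disable(pmu);		/* the fix: outer disable */
	event_sched_out_model(pmu);	/* group leader */
	for (int i = 0; i < nr_siblings; i++) {
		event_sched_out_model(pmu);	/* each sibling */
		restarted += pmu->counting;	/* counts if PMU re-enabled */
	}
	pmu_enable(pmu);		/* the fix: outer enable */

	return restarted;
}
```

Without the outer disable/enable pair, `counting` would flip back to 1 between sibling removals, which is exactly the window the patch closes.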
1 parent db4a835 commit 3f005e7

1 file changed, 4 insertions(+), 0 deletions(-)

kernel/events/core.c (4 additions, 0 deletions)

```diff
@@ -1796,6 +1796,8 @@ group_sched_out(struct perf_event *group_event,
 	struct perf_event *event;
 	int state = group_event->state;
 
+	perf_pmu_disable(ctx->pmu);
+
 	event_sched_out(group_event, cpuctx, ctx);
 
 	/*
@@ -1804,6 +1806,8 @@ group_sched_out(struct perf_event *group_event,
 	list_for_each_entry(event, &group_event->sibling_list, group_entry)
 		event_sched_out(event, cpuctx, ctx);
 
+	perf_pmu_enable(ctx->pmu);
+
 	if (state == PERF_EVENT_STATE_ACTIVE && group_event->attr.exclusive)
 		cpuctx->exclusive = 0;
 }
```
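The commit message notes that group_sched_in() already gets atomicity via {start,commit}_txn(), falling back to perf_pmu_{disable,enable}() when a PMU provides no transaction hooks. A hedged sketch of that fallback dispatch, as a user-space model; the struct layout, `sched_in_group()`, and the fallback helpers are illustrative assumptions, not the kernel's actual API.

```c
/* Model of the txn-with-fallback pattern: use the PMU's transaction
 * hooks when present, otherwise bracket the group with disable/enable.
 * Assumption: simplified illustration, not the kernel struct pmu ABI. */
#include <stddef.h>

struct txn_pmu {
	int disabled;				/* model disable depth */
	void (*start_txn)(struct txn_pmu *);	/* may be NULL */
	int  (*commit_txn)(struct txn_pmu *);	/* may be NULL */
};

static void fallback_start(struct txn_pmu *pmu)  { pmu->disabled++; }
static int  fallback_commit(struct txn_pmu *pmu) { pmu->disabled--; return 0; }

/* group_sched_in()-style wrapper: the whole group is added under one
 * transaction (or one disable/enable pair), so it lands atomically. */
int sched_in_group(struct txn_pmu *pmu, int nr_events)
{
	void (*start)(struct txn_pmu *)  =
		pmu->start_txn  ? pmu->start_txn  : fallback_start;
	int  (*commit)(struct txn_pmu *) =
		pmu->commit_txn ? pmu->commit_txn : fallback_commit;

	start(pmu);
	for (int i = 0; i < nr_events; i++)
		;	/* pmu->add() for leader and siblings would run here */
	return commit(pmu);	/* all-or-nothing for the group */
}
```

The patch above gives group_sched_out() the same bracketing on the removal side, using the plain disable/enable pair directly.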
