Commit 32e839d

gormanm authored and Ingo Molnar committed
sched/fair: Use a recently used CPU as an idle candidate and the basis for SIS
The select_idle_sibling() (SIS) rewrite in commit:

  10e2f1a ("sched/core: Rewrite and improve select_idle_siblings()")

replaced a domain iteration with a search that, broadly speaking, does a wrapped walk of the scheduler domain sharing a last-level-cache. While this had a number of improvements, one consequence is that two tasks that share a waker/wakee relationship push each other around a socket. Even though only two tasks may be active, all cores end up evenly used. This is great from a search perspective and spreads a load across individual cores, but it has adverse consequences for cpufreq. As each CPU has relatively low utilisation, cpufreq may decide the utilisation is too low to use a higher P-state and overall computation throughput suffers. While individual cpufreq and cpuidle drivers may compensate by artificially boosting the P-state (at C0) or avoiding lower C-states (during idle), it does not help if hardware-based cpufreq (e.g. HWP) is used.

This patch tracks a recently used CPU based on the CPU a task was running on when it last acted as a waker. During SIS, the wakee's recently used CPU is used as a target if it is still allowed for the task and is idle.

The benefit may be non-obvious, so consider an example of two tasks communicating back and forth. Task A may be an application doing IO where task B is a kworker or kthread like journald. Task A may issue IO, wake B, and B wakes up A on completion. With the existing scheme this may look like the following (potentially different IDs if SMT is in use, but a similar principle applies):

  A (cpu 0)	wake B (wakes on cpu 1)
  B (cpu 1)	wake A (wakes on cpu 2)
  A (cpu 2)	wake B (wakes on cpu 3)
  etc.

A careful reader may wonder why CPU 0 was not idle when B wakes A the first time. It is simply because A can be rescheduled to another CPU, the pattern is that prev == target when B tries to wake A, and the information about CPU 0 has been lost.
With this patch, the pattern is more likely to be:

  A (cpu 0)	wake B (wakes on cpu 1)
  B (cpu 1)	wake A (wakes on cpu 0)
  A (cpu 0)	wake B (wakes on cpu 1)
  etc.

i.e. two communicating tasks are more likely to use just two cores instead of all the cores sharing an LLC.

The most dramatic speedup was noticed on dbench using the XFS filesystem on UMA, as clients interact heavily with workqueues in that configuration. Note that a similar speedup is not observed on ext4, as the wakeup pattern is different:

                          4.15.0-rc9             4.15.0-rc9
                           waprev-v1        biasancestor-v1
  Hmean     1      287.54 (   0.00%)      817.01 ( 184.14%)
  Hmean     2     1268.12 (   0.00%)     1781.24 (  40.46%)
  Hmean     4     1739.68 (   0.00%)     1594.47 (  -8.35%)
  Hmean     8     2464.12 (   0.00%)     2479.56 (   0.63%)
  Hmean    64     1455.57 (   0.00%)     1434.68 (  -1.44%)

The results can be less dramatic on NUMA, where automatic balancing interferes with the test. It is also known that network benchmarks running on localhost benefit quite a bit from this patch (roughly 10% on netperf RR for UDP and TCP, depending on the machine). Hackbench also sees small improvements (6-11%, depending on machine and thread count). The Facebook schbench was also tested, but in most cases showed little or no difference in wakeup latencies.

Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 806486c commit 32e839d

File tree

3 files changed: +29 −2 lines changed


include/linux/sched.h (8 additions, 0 deletions)

@@ -555,6 +555,14 @@ struct task_struct {
 	unsigned long wakee_flip_decay_ts;
 	struct task_struct *last_wakee;
+
+	/*
+	 * recent_used_cpu is initially set as the last CPU used by a task
+	 * that wakes affine another task. Waker/wakee relationships can
+	 * push tasks around a CPU where each wakeup moves to the next one.
+	 * Tracking a recently used CPU allows a quick search for a recently
+	 * used CPU that may be idle.
+	 */
+	int recent_used_cpu;
 	int wake_cpu;
 #endif
 	int on_rq;

kernel/sched/core.c (1 addition, 0 deletions)

@@ -2461,6 +2461,7 @@ void wake_up_new_task(struct task_struct *p)
 	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
 	 * as we're not fully set-up yet.
 	 */
+	p->recent_used_cpu = task_cpu(p);
 	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
 #endif
 	rq = __task_rq_lock(p, &rf);

kernel/sched/fair.c (20 additions, 2 deletions)

@@ -6197,7 +6197,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 static int select_idle_sibling(struct task_struct *p, int prev, int target)
 {
 	struct sched_domain *sd;
-	int i;
+	int i, recent_used_cpu;
 
 	if (idle_cpu(target))
 		return target;
@@ -6208,6 +6208,21 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
 		return prev;
 
+	/* Check a recently used CPU as a potential idle candidate */
+	recent_used_cpu = p->recent_used_cpu;
+	if (recent_used_cpu != prev &&
+	    recent_used_cpu != target &&
+	    cpus_share_cache(recent_used_cpu, target) &&
+	    idle_cpu(recent_used_cpu) &&
+	    cpumask_test_cpu(p->recent_used_cpu, &p->cpus_allowed)) {
+		/*
+		 * Replace recent_used_cpu with prev as it is a potential
+		 * candidate for the next wake.
+		 */
+		p->recent_used_cpu = prev;
+		return recent_used_cpu;
+	}
+
 	sd = rcu_dereference(per_cpu(sd_llc, target));
 	if (!sd)
 		return target;
@@ -6375,9 +6390,12 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 	if (!sd) {
 pick_cpu:
-		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
+		if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
 
+			if (want_affine)
+				current->recent_used_cpu = cpu;
+		}
 	} else {
 		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	}
