
Commit 74b7040

gormanm authored and Dan Duval committed
sched/fair: Use a recently used CPU as an idle candidate and the basis for SIS
Orabug: 28088230

The select_idle_sibling() (SIS) rewrite in commit:

  10e2f1a ("sched/core: Rewrite and improve select_idle_siblings()")

... replaced a domain iteration with a search that broadly speaking does a wrapped walk of the scheduler domain sharing a last-level-cache. While this had a number of improvements, one consequence is that two tasks that share a waker/wakee relationship push each other around a socket. Even though two tasks may be active, all cores are evenly used. This is great from a search perspective and spreads a load across individual cores, but it has adverse consequences for cpufreq. As each CPU has relatively low utilisation, cpufreq may decide the utilisation is too low to use a higher P-state and overall computation throughput suffers.

While individual cpufreq and cpuidle drivers may compensate by artificially boosting P-state (at C0) or avoiding lower C-states (during idle), it does not help if hardware-based cpufreq (e.g. HWP) is used.

This patch tracks a recently used CPU: the CPU a task was running on when it last was a waker, or a CPU it was recently using when it was a wakee. During SIS, the recently used CPU is used as a target if it's still allowed by the task and is idle.

The benefit may be non-obvious, so consider an example of two tasks communicating back and forth. Task A may be an application doing IO where task B is a kworker or kthread like journald. Task A may issue IO, wake B, and B wakes up A on completion. With the existing scheme this may look like the following (potentially different IDs if SMT is in use, but a similar principle applies):

  A (cpu 0)	wake	B (wakes on cpu 1)
  B (cpu 1)	wake	A (wakes on cpu 2)
  A (cpu 2)	wake	B (wakes on cpu 3)
  etc.

A careful reader may wonder why CPU 0 was not idle when B wakes A the first time. It's simply because A can be rescheduled to another CPU; the pattern is that prev == target when B tries to wake up A, and the information about CPU 0 has been lost. With this patch, the pattern is more likely to be:

  A (cpu 0)	wake	B (wakes on cpu 1)
  B (cpu 1)	wake	A (wakes on cpu 0)
  A (cpu 0)	wake	B (wakes on cpu 1)
  etc.

i.e. two communicating tasks are more likely to use just two cores instead of all available cores sharing a LLC.

The most dramatic speedup was noticed on dbench using the XFS filesystem on UMA, as clients interact heavily with workqueues in that configuration. Note that a similar speedup is not observed on ext4, as the wakeup pattern is different:

                           4.15.0-rc9             4.15.0-rc9
                            waprev-v1        biasancestor-v1
  Hmean     1      287.54 (   0.00%)      817.01 ( 184.14%)
  Hmean     2     1268.12 (   0.00%)     1781.24 (  40.46%)
  Hmean     4     1739.68 (   0.00%)     1594.47 (  -8.35%)
  Hmean     8     2464.12 (   0.00%)     2479.56 (   0.63%)
  Hmean     64    1455.57 (   0.00%)     1434.68 (  -1.44%)

The results can be less dramatic on NUMA, where automatic balancing interferes with the test. It's also known that network benchmarks running on localhost benefit quite a bit from this patch (roughly 10% on netperf RR for UDP and TCP, depending on the machine). Hackbench also sees small improvements (6-11% depending on machine and thread count). The Facebook schbench was also tested, but in most cases showed little or no difference in wakeup latencies.
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
(cherry picked from commit 32e839d)
Signed-off-by: Dan Duval <[email protected]>
Reviewed-by: Chuck Anderson <[email protected]>

Conflict:
	kernel/sched/fair.c

Note: some changes were added to this submission to maintain a stable kABI.
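Editor's aside: the two wakeup traces in the message are easy to reproduce with a user-space toy model. This is illustrative only, not kernel code: cpu_is_idle(), walk_from() and simulate() are invented names, the "only the waker's CPU is busy" idle rule is a simplification, and the real select_idle_sibling() checks target, prev and cpus_allowed before the recent CPU.

#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 8

struct task {
	int cpu;		/* CPU the task last ran on */
	int recent_used_cpu;	/* recorded when the task was last a waker */
};

/* Toy idle rule: only the currently running waker occupies a CPU. */
static bool cpu_is_idle(int cpu, int busy_cpu)
{
	return cpu != busy_cpu;
}

/* Stand-in for the wrapped LLC walk: first idle CPU after 'start'. */
static int walk_from(int start, int busy_cpu)
{
	for (int i = 1; i < NR_CPUS; i++) {
		int cpu = (start + i) % NR_CPUS;
		if (cpu_is_idle(cpu, busy_cpu))
			return cpu;
	}
	return start;
}

static void simulate(bool use_recent)
{
	/* recent_used_cpu seeded with the task's CPU, as at fork. */
	struct task a = { .cpu = 0, .recent_used_cpu = 0 };
	struct task b = { .cpu = 1, .recent_used_cpu = 1 };
	struct task *waker = &a, *wakee = &b;

	printf("%s recent_used_cpu:\n", use_recent ? "with" : "without");
	for (int step = 0; step < 4; step++) {
		int cpu;

		/* Waker side of the patch: record the waker's own CPU. */
		waker->recent_used_cpu = waker->cpu;

		/* Wakee side: try its recently used CPU, else walk. */
		if (use_recent &&
		    cpu_is_idle(wakee->recent_used_cpu, waker->cpu))
			cpu = wakee->recent_used_cpu;
		else
			cpu = walk_from(waker->cpu, waker->cpu);

		wakee->cpu = cpu;
		printf("  %s (cpu %d) wake %s (wakes on cpu %d)\n",
		       waker == &a ? "A" : "B", waker->cpu,
		       wakee == &a ? "A" : "B", cpu);

		/* The waker sleeps; roles swap for the next wakeup. */
		struct task *tmp = waker;
		waker = wakee;
		wakee = tmp;
	}
}

int main(void)
{
	simulate(false);	/* A and B drift around the socket */
	simulate(true);		/* A and B settle on two CPUs */
	return 0;
}

The first run walks the pair onto CPUs 1, 2, 3, 4 as in the first trace; the second settles into the 0/1 ping-pong of the second trace.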
1 parent: 03153dc

File tree: 3 files changed (+30, -4 lines)

include/linux/sched.h (8 additions, 1 deletion)

@@ -1149,9 +1149,16 @@ struct task_struct {
 	/* Used by LSM modules for access restriction: */
 	void				*security;
 #endif
+	/*
+	 * recent_used_cpu is initially set as the last CPU used by a task
+	 * that wakes affine another task. Waker/wakee relationships can
+	 * push tasks around a CPU where each wakeup moves to the next one.
+	 * Tracking a recently used CPU allows a quick search for a recently
+	 * used CPU that may be idle.
+	 */
+	UEK_KABI_USE(1, int recent_used_cpu);
 
 	/* Space for future expansion without breaking kABI. */
-	UEK_KABI_RESERVED(1);
 	UEK_KABI_RESERVED(2);
 	UEK_KABI_RESERVED(3);
 	UEK_KABI_RESERVED(4);
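The UEK_KABI_* macro definitions are not part of this diff. As a minimal sketch of the usual pattern, assuming definitions modeled on the familiar RH_KABI_* style (the real UEK macros may differ):

/*
 * Assumed definitions, not from this commit. A reserved slot pins the
 * struct's size and member offsets; "using" a slot overlays the new
 * member on the reserved storage through an anonymous union, so the
 * layout (and hence the kABI) is unchanged while recent_used_cpu
 * becomes addressable.
 */
#define UEK_KABI_RESERVED(n)	unsigned long uek_reserved##n
#define UEK_KABI_USE(n, _new)			\
	union {					\
		_new;				\
		unsigned long uek_reserved##n;	\
	}

Under such a scheme, replacing UEK_KABI_RESERVED(1) with UEK_KABI_USE(1, int recent_used_cpu) consumes reserved slot 1 without moving slots 2-4, which is presumably what the "stable kABI" note above refers to.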

kernel/sched/core.c (1 addition, 0 deletions)

@@ -2479,6 +2479,7 @@ void wake_up_new_task(struct task_struct *p)
 	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
 	 * as we're not fully set-up yet.
 	 */
+	p->recent_used_cpu = task_cpu(p);
 	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
 #endif
 	rq = __task_rq_lock(p, &rf);

kernel/sched/fair.c (21 additions, 3 deletions)

@@ -5870,7 +5870,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 static int select_idle_sibling(struct task_struct *p, int prev, int target)
 {
 	struct sched_domain *sd;
-	int i;
+	int i, recent_used_cpu;
 
 	if (idle_cpu(target))
 		return target;
@@ -5881,6 +5881,21 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
 		return prev;
 
+	/* Check a recently used CPU as a potential idle candidate */
+	recent_used_cpu = p->recent_used_cpu;
+	if (recent_used_cpu != prev &&
+	    recent_used_cpu != target &&
+	    cpus_share_cache(recent_used_cpu, target) &&
+	    idle_cpu(recent_used_cpu) &&
+	    cpumask_test_cpu(p->recent_used_cpu, &p->cpus_allowed)) {
+		/*
+		 * Replace recent_used_cpu with prev as it is a potential
+		 * candidate for the next wake.
+		 */
+		p->recent_used_cpu = prev;
+		return recent_used_cpu;
+	}
+
 	sd = rcu_dereference(per_cpu(sd_llc, target));
 	if (!sd)
 		return target;
@@ -6041,10 +6056,13 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 	}
 
 	if (!sd) {
-pick_cpu:
-		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
+pick_cpu:
+		if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
 
+			if (want_affine)
+				current->recent_used_cpu = cpu;
+		}
 	} else {
 		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	}
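Taken together, the three hunks form one lifecycle: wake_up_new_task() seeds recent_used_cpu with task_cpu(p) at fork; on each affine wakeup the waker records the waking CPU via current->recent_used_cpu = cpu; and when that task is next woken, select_idle_sibling() tries the recorded CPU after target and prev (the inequality checks skip CPUs already tested), swapping in prev as the next candidate on success so a communicating pair keeps alternating between the same two LLC-sharing CPUs.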
