Commit 7e67a85

Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:

 - MAINTAINERS: Add Mark Rutland as perf submaintainer, Juri Lelli and
   Vincent Guittot as scheduler submaintainers. Add Dietmar Eggemann,
   Steven Rostedt, Ben Segall and Mel Gorman as scheduler reviewers.

   As perf and the scheduler is getting bigger and more complex,
   document the status quo of current responsibilities and interests,
   and spread the review pain^H^H^H^H fun via an increase in the Cc:
   linecount generated by scripts/get_maintainer.pl. :-)

 - Add another series of patches that brings the -rt (PREEMPT_RT) tree
   closer to mainline: split the monolithic CONFIG_PREEMPT dependencies
   into a new CONFIG_PREEMPTION category that will allow the eventual
   introduction of CONFIG_PREEMPT_RT. Still a few more hundred patches
   to go though.

 - Extend the CPU cgroup controller with uclamp.min and uclamp.max to
   allow the finer shaping of CPU bandwidth usage.

 - Micro-optimize energy-aware wake-ups from O(CPUS^2) to O(CPUS).

 - Improve the behavior of high CPU count, high thread count
   applications running under cpu.cfs_quota_us constraints.

 - Improve balancing with SCHED_IDLE (SCHED_BATCH) tasks present.

 - Improve CPU isolation housekeeping CPU allocation NUMA locality.

 - Fix deadline scheduler bandwidth calculations and logic when cpusets
   rebuilds the topology, or when it gets deadline-throttled while it's
   being offlined.

 - Convert the cpuset_mutex to percpu_rwsem, to allow it to be used from
   setscheduler() system calls without creating global serialization.
   Add new synchronization between cpuset topology-changing events and
   the deadline acceptance tests in setscheduler(), which were broken
   before.

 - Rework the active_mm state machine to be less confusing and more
   optimal.

 - Rework (simplify) the pick_next_task() slowpath.

 - Improve load-balancing on AMD EPYC systems.

 - ... and misc cleanups, smaller fixes and improvements - please see
   the Git log for more details.

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (53 commits)
  sched/psi: Correct overly pessimistic size calculation
  sched/fair: Speed-up energy-aware wake-ups
  sched/uclamp: Always use 'enum uclamp_id' for clamp_id values
  sched/uclamp: Update CPU's refcount on TG's clamp changes
  sched/uclamp: Use TG's clamps to restrict TASK's clamps
  sched/uclamp: Propagate system defaults to the root group
  sched/uclamp: Propagate parent clamps
  sched/uclamp: Extend CPU's cgroup controller
  sched/topology: Improve load balancing on AMD EPYC systems
  arch, ia64: Make NUMA select SMP
  sched, perf: MAINTAINERS update, add submaintainers and reviewers
  sched/fair: Use rq_lock/unlock in online_fair_sched_group
  cpufreq: schedutil: fix equation in comment
  sched: Rework pick_next_task() slow-path
  sched: Allow put_prev_task() to drop rq->lock
  sched/fair: Expose newidle_balance()
  sched: Add task_struct pointer to sched_class::set_curr_task
  sched: Rework CPU hotplug task selection
  sched/{rt,deadline}: Fix set_next_task vs pick_next_task
  sched: Fix kerneldoc comment for ia64_set_curr_task
  ...
2 parents 772c1d0 + 563c4f8 commit 7e67a85


60 files changed, +1274 / -595 lines changed

Documentation/admin-guide/cgroup-v2.rst

Lines changed: 34 additions & 0 deletions
@@ -951,6 +951,13 @@ controller implements weight and absolute bandwidth limit models for
 normal scheduling policy and absolute bandwidth allocation model for
 realtime scheduling policy.
 
+In all the above models, cycles distribution is defined only on a temporal
+base and it does not account for the frequency at which tasks are executed.
+The (optional) utilization clamping support allows to hint the schedutil
+cpufreq governor about the minimum desired frequency which should always be
+provided by a CPU, as well as the maximum desired frequency, which should not
+be exceeded by a CPU.
+
 WARNING: cgroup2 doesn't yet support control of realtime processes and
 the cpu controller can only be enabled when all RT processes are in
 the root cgroup. Be aware that system management software may already
@@ -1016,6 +1023,33 @@ All time durations are in microseconds.
 	Shows pressure stall information for CPU. See
 	Documentation/accounting/psi.rst for details.
 
+  cpu.uclamp.min
+	A read-write single value file which exists on non-root cgroups.
+	The default is "0", i.e. no utilization boosting.
+
+	The requested minimum utilization (protection) as a percentage
+	rational number, e.g. 12.34 for 12.34%.
+
+	This interface allows reading and setting minimum utilization clamp
+	values similar to the sched_setattr(2). This minimum utilization
+	value is used to clamp the task specific minimum utilization clamp.
+
+	The requested minimum utilization (protection) is always capped by
+	the current value for the maximum utilization (limit), i.e.
+	`cpu.uclamp.max`.
+
+  cpu.uclamp.max
+	A read-write single value file which exists on non-root cgroups.
+	The default is "max". i.e. no utilization capping
+
+	The requested maximum utilization (limit) as a percentage rational
+	number, e.g. 98.76 for 98.76%.
+
+	This interface allows reading and setting maximum utilization clamp
+	values similar to the sched_setattr(2). This maximum utilization
+	value is used to clamp the task specific maximum utilization clamp.
+
+
 
 Memory
 ------
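The two files documented above are ordinary cgroupfs attributes, so the interface can be driven from any language. A minimal C sketch, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and an already-created child group named "app" (both of which are assumptions for illustration, not part of the patch):

    /*
     * Hypothetical example: request at least 20% and at most 80%
     * utilization for the "app" cgroup. Mount point and group name
     * are assumed; error handling is minimal.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static int write_attr(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fprintf(f, "%s\n", val);
            return fclose(f);
    }

    int main(void)
    {
            if (write_attr("/sys/fs/cgroup/app/cpu.uclamp.min", "20.00") ||
                write_attr("/sys/fs/cgroup/app/cpu.uclamp.max", "80.00")) {
                    perror("uclamp write");
                    return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
    }

Writing the string "max" back to cpu.uclamp.max restores the default no-capping behavior, mirroring the per-task clamps set via sched_setattr(2) that the documentation text refers to.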

Documentation/scheduler/sched-bwc.rst

Lines changed: 60 additions & 14 deletions
@@ -9,15 +9,16 @@ CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
 specification of the maximum CPU bandwidth available to a group or hierarchy.
 
 The bandwidth allowed for a group is specified using a quota and period. Within
-each given "period" (microseconds), a group is allowed to consume only up to
-"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
-group exceeds this limit (for that period), the tasks belonging to its
-hierarchy will be throttled and are not allowed to run again until the next
-period.
-
-A group's unused runtime is globally tracked, being refreshed with quota units
-above at each period boundary. As threads consume this bandwidth it is
-transferred to cpu-local "silos" on a demand basis. The amount transferred
+each given "period" (microseconds), a task group is allocated up to "quota"
+microseconds of CPU time. That quota is assigned to per-cpu run queues in
+slices as threads in the cgroup become runnable. Once all quota has been
+assigned any additional requests for quota will result in those threads being
+throttled. Throttled threads will not be able to run again until the next
+period when the quota is replenished.
+
+A group's unassigned quota is globally tracked, being refreshed back to
+cfs_quota units at each period boundary. As threads consume this bandwidth it
+is transferred to cpu-local "silos" on a demand basis. The amount transferred
 within each of these updates is tunable and described as the "slice".
 
 Management
@@ -35,12 +36,12 @@ The default values are::
 
 A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
 bandwidth restriction in place, such a group is described as an unconstrained
-bandwidth group. This represents the traditional work-conserving behavior for
+bandwidth group.  This represents the traditional work-conserving behavior for
 CFS.
 
 Writing any (valid) positive value(s) will enact the specified bandwidth limit.
-The minimum quota allowed for the quota or period is 1ms. There is also an
-upper bound on the period length of 1s. Additional restrictions exist when
+The minimum quota allowed for the quota or period is 1ms.  There is also an
+upper bound on the period length of 1s.  Additional restrictions exist when
 bandwidth limits are used in a hierarchical fashion, these are explained in
 more detail below.
 
@@ -53,8 +54,8 @@ unthrottled if it is in a constrained state.
 System wide settings
 --------------------
 For efficiency run-time is transferred between the global pool and CPU local
-"silos" in a batch fashion. This greatly reduces global accounting pressure
-on large systems. The amount transferred each time such an update is required
+"silos" in a batch fashion.  This greatly reduces global accounting pressure
+on large systems.  The amount transferred each time such an update is required
 is described as the "slice".
 
 This is tunable via procfs::
@@ -97,6 +98,51 @@ There are two ways in which a group may become throttled:
 In case b) above, even though the child may have runtime remaining it will not
 be allowed to until the parent's runtime is refreshed.
 
+CFS Bandwidth Quota Caveats
+---------------------------
+Once a slice is assigned to a cpu it does not expire. However all but 1ms of
+the slice may be returned to the global pool if all threads on that cpu become
+unrunnable. This is configured at compile time by the min_cfs_rq_runtime
+variable. This is a performance tweak that helps prevent added contention on
+the global lock.
+
+The fact that cpu-local slices do not expire results in some interesting corner
+cases that should be understood.
+
+For cgroup cpu constrained applications that are cpu limited this is a
+relatively moot point because they will naturally consume the entirety of their
+quota as well as the entirety of each cpu-local slice in each period. As a
+result it is expected that nr_periods roughly equal nr_throttled, and that
+cpuacct.usage will increase roughly equal to cfs_quota_us in each period.
+
+For highly-threaded, non-cpu bound applications this non-expiration nuance
+allows applications to briefly burst past their quota limits by the amount of
+unused slice on each cpu that the task group is running on (typically at most
+1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
+applies if quota had been assigned to a cpu and then not fully used or returned
+in previous periods. This burst amount will not be transferred between cores.
+As a result, this mechanism still strictly limits the task group to quota
+average usage, albeit over a longer time window than a single period. This
+also limits the burst ability to no more than 1ms per cpu. This provides
+better more predictable user experience for highly threaded applications with
+small quota limits on high core count machines. It also eliminates the
+propensity to throttle these applications while simultanously using less than
+quota amounts of cpu. Another way to say this, is that by allowing the unused
+portion of a slice to remain valid across periods we have decreased the
+possibility of wastefully expiring quota on cpu-local silos that don't need a
+full slice's amount of cpu time.
+
+The interaction between cpu-bound and non-cpu-bound-interactive applications
+should also be considered, especially when single core usage hits 100%. If you
+gave each of these applications half of a cpu-core and they both got scheduled
+on the same CPU it is theoretically possible that the non-cpu bound application
+will use up to 1ms additional quota in some periods, thereby preventing the
+cpu-bound application from fully using its quota by that same amount. In these
+instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
+decide which application is chosen to run, as they will both be runnable and
+have remaining quota. This runtime discrepancy will be made up in the following
+periods when the interactive application idles.
+
 Examples
 --------
 1. Limit a group to 1 CPU worth of runtime::
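To put numbers on the caveats section added above, here is a small self-contained C sketch that computes the worst-case burst a group can carry over from non-expiring cpu-local slices. The 1ms figure is the min_cfs_rq_runtime default quoted in the text; the quota, period, and CPU count are invented inputs for illustration:

    #include <stdio.h>

    #define MIN_CFS_RQ_RUNTIME_US 1000      /* 1ms kept per cpu-local silo */

    int main(void)
    {
            long quota_us  = 50000;         /* cpu.cfs_quota_us: 50ms...     */
            long period_us = 100000;        /* ...per 100ms period = 0.5 CPU */
            int  nr_cpus   = 16;            /* cpus the group's threads use  */

            /*
             * Each cpu may retain up to 1ms of unused slice across a period
             * boundary, so a wide, bursty group can briefly exceed quota:
             */
            long burst_us = (long)nr_cpus * MIN_CFS_RQ_RUNTIME_US;

            printf("nominal %ldus per %ldus period, worst-case burst +%ldus\n",
                   quota_us, period_us, burst_us);
            return 0;
    }

As the text stresses, the burst does not raise average usage: any extra microseconds consumed in one period were quota already granted, just spent later than the period in which they were assigned.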

MAINTAINERS

Lines changed: 7 additions & 0 deletions
@@ -12578,6 +12578,7 @@ PERFORMANCE EVENTS SUBSYSTEM
 M:	Peter Zijlstra <[email protected]>
 M:	Ingo Molnar <[email protected]>
 M:	Arnaldo Carvalho de Melo <[email protected]>
+R:	Mark Rutland <[email protected]>
 R:	Alexander Shishkin <[email protected]>
 R:	Jiri Olsa <[email protected]>
 R:	Namhyung Kim <[email protected]>
@@ -14175,6 +14176,12 @@ F:	drivers/watchdog/sc1200wdt.c
 SCHEDULER
 M:	Ingo Molnar <[email protected]>
 M:	Peter Zijlstra <[email protected]>
+M:	Juri Lelli <[email protected]> (SCHED_DEADLINE)
+M:	Vincent Guittot <[email protected]> (SCHED_NORMAL)
+R:	Dietmar Eggemann <[email protected]> (SCHED_NORMAL)
+R:	Steven Rostedt <[email protected]> (SCHED_FIFO/SCHED_RR)
+R:	Ben Segall <[email protected]> (CONFIG_CFS_BANDWIDTH)
+R:	Mel Gorman <[email protected]> (CONFIG_NUMA_BALANCING)
 L:	[email protected]
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
 S:	Maintained

arch/Kconfig

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ config STATIC_KEYS_SELFTEST
 config OPTPROBES
 	def_bool y
 	depends on KPROBES && HAVE_OPTPROBES
-	select TASKS_RCU if PREEMPT
+	select TASKS_RCU if PREEMPTION
 
 config KPROBES_ON_FTRACE
 	def_bool y

arch/ia64/Kconfig

Lines changed: 1 addition & 0 deletions
@@ -311,6 +311,7 @@ config ARCH_DISCONTIGMEM_DEFAULT
 config NUMA
 	bool "NUMA support"
 	depends on !FLATMEM
+	select SMP
 	help
 	  Say Y to compile the kernel to support NUMA (Non-Uniform Memory
 	  Access). This option is for configuring high-end multiprocessor

arch/x86/entry/entry_32.S

Lines changed: 3 additions & 3 deletions
@@ -63,7 +63,7 @@
  * enough to patch inline, increasing performance.
  */
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 # define preempt_stop(clobbers)	DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
 #else
 # define preempt_stop(clobbers)
@@ -1084,7 +1084,7 @@ restore_all:
 	INTERRUPT_RETURN
 
 restore_all_kernel:
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
 	jnz	.Lno_preempt
@@ -1364,7 +1364,7 @@ ENTRY(xen_hypervisor_callback)
 ENTRY(xen_do_upcall)
 1:	mov	%esp, %eax
 	call	xen_evtchn_do_upcall
-#ifndef CONFIG_PREEMPT
+#ifndef CONFIG_PREEMPTION
 	call	xen_maybe_preempt_hcall
 #endif
 	jmp	ret_from_intr

arch/x86/entry/entry_64.S

Lines changed: 2 additions & 2 deletions
@@ -664,7 +664,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
 
 /* Returning to kernel space */
 retint_kernel:
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	/* Interrupts are off */
 	/* Check if we need preemption */
 	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
@@ -1115,7 +1115,7 @@ ENTRY(xen_do_hypervisor_callback)	/* do_hypervisor_callback(struct *pt_regs) */
 	call	xen_evtchn_do_upcall
 	LEAVE_IRQ_STACK
 
-#ifndef CONFIG_PREEMPT
+#ifndef CONFIG_PREEMPTION
 	call	xen_maybe_preempt_hcall
 #endif
 	jmp	error_exit

arch/x86/entry/thunk_32.S

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@
 	THUNK trace_hardirqs_off_thunk,trace_hardirqs_off_caller,1
 #endif
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	THUNK ___preempt_schedule, preempt_schedule
 	THUNK ___preempt_schedule_notrace, preempt_schedule_notrace
 	EXPORT_SYMBOL(___preempt_schedule)

arch/x86/entry/thunk_64.S

Lines changed: 2 additions & 2 deletions
@@ -46,7 +46,7 @@
 	THUNK lockdep_sys_exit_thunk,lockdep_sys_exit
 #endif
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	THUNK ___preempt_schedule, preempt_schedule
 	THUNK ___preempt_schedule_notrace, preempt_schedule_notrace
 	EXPORT_SYMBOL(___preempt_schedule)
@@ -55,7 +55,7 @@
 
 #if defined(CONFIG_TRACE_IRQFLAGS) \
  || defined(CONFIG_DEBUG_LOCK_ALLOC) \
- || defined(CONFIG_PREEMPT)
+ || defined(CONFIG_PREEMPTION)
 .L_restore:
 	popq %r11
 	popq %r10

arch/x86/include/asm/preempt.h

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ static __always_inline bool should_resched(int preempt_offset)
 	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
 }
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 extern asmlinkage void ___preempt_schedule(void);
 # define __preempt_schedule() \
 	asm volatile ("call ___preempt_schedule" : ASM_CALL_CONSTRAINT)

arch/x86/kernel/cpu/amd.c

Lines changed: 5 additions & 0 deletions
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/sched/clock.h>
 #include <linux/random.h>
+#include <linux/topology.h>
 #include <asm/processor.h>
 #include <asm/apic.h>
 #include <asm/cacheinfo.h>
@@ -889,6 +890,10 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
 {
 	set_cpu_cap(c, X86_FEATURE_ZEN);
 
+#ifdef CONFIG_NUMA
+	node_reclaim_distance = 32;
+#endif
+
 	/*
 	 * Fix erratum 1076: CPB feature bit not being set in CPUID.
 	 * Always set it, except when running under a hypervisor.
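Raising node_reclaim_distance to 32 here widens what the VM and scheduler treat as "near" NUMA nodes on Zen parts, whose SLIT tables report a remote-die distance just above the generic RECLAIM_DISTANCE default of 30. A simplified sketch of the kind of comparison this knob feeds (the real consumers live in kernel/sched/topology.c and mm/, so this is illustrative only):

    /*
     * Illustrative only: nodes count as reclaim/balance-local when their
     * SLIT distance does not exceed the tunable threshold. node_distance()
     * is the kernel's SLIT lookup; Zen overrides the threshold to 32.
     */
    static bool nodes_are_near(int a, int b)
    {
            return node_distance(a, b) <= node_reclaim_distance;
    }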

arch/x86/kernel/dumpstack.c

Lines changed: 6 additions & 1 deletion
@@ -367,13 +367,18 @@ NOKPROBE_SYMBOL(oops_end);
 
 int __die(const char *str, struct pt_regs *regs, long err)
 {
+	const char *pr = "";
+
 	/* Save the regs of the first oops for the executive summary later. */
 	if (!die_counter)
 		exec_summary_regs = *regs;
 
+	if (IS_ENABLED(CONFIG_PREEMPTION))
+		pr = IS_ENABLED(CONFIG_PREEMPT_RT) ? " PREEMPT_RT" : " PREEMPT";
+
 	printk(KERN_DEFAULT
 	       "%s: %04lx [#%d]%s%s%s%s%s\n", str, err & 0xffff, ++die_counter,
-	       IS_ENABLED(CONFIG_PREEMPT) ? " PREEMPT" : "",
+	       pr,
 	       IS_ENABLED(CONFIG_SMP)     ? " SMP"     : "",
 	       debug_pagealloc_enabled()  ? " DEBUG_PAGEALLOC" : "",
 	       IS_ENABLED(CONFIG_KASAN)   ? " KASAN"   : "",
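The __die() change above replaces a compile-time CONFIG_PREEMPT string with a runtime-selected tag, so a future CONFIG_PREEMPT_RT=y kernel will label its oops banners distinctly. The same selection logic can be sketched as a stand-alone userspace program, with the kconfig symbols stubbed out as plain macros (an assumption, since userspace has no kconfig):

    #include <stdio.h>

    /* Stand-ins for kconfig: pretend we built a preemptible, non-RT kernel. */
    #define CONFIG_PREEMPTION   1
    #define CONFIG_PREEMPT_RT   0
    #define IS_ENABLED(option)  (option)

    int main(void)
    {
            const char *pr = "";

            if (IS_ENABLED(CONFIG_PREEMPTION))
                    pr = IS_ENABLED(CONFIG_PREEMPT_RT) ? " PREEMPT_RT"
                                                       : " PREEMPT";

            printf("oops banner tag:%s\n", pr); /* prints " PREEMPT" here */
            return 0;
    }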

arch/x86/kernel/kprobes/core.c

Lines changed: 1 addition & 1 deletion
@@ -580,7 +580,7 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
 	if (setup_detour_execution(p, regs, reenter))
 		return;
 
-#if !defined(CONFIG_PREEMPT)
+#if !defined(CONFIG_PREEMPTION)
 	if (p->ainsn.boostable && !p->post_handler) {
 		/* Boost up -- we can execute copied instructions directly */
 		if (!reenter)

arch/x86/kernel/kvm.c

Lines changed: 1 addition & 1 deletion
@@ -311,7 +311,7 @@ static void kvm_guest_cpu_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
 		u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 		pa |= KVM_ASYNC_PF_SEND_ALWAYS;
 #endif
 		pa |= KVM_ASYNC_PF_ENABLED;

include/asm-generic/preempt.h

Lines changed: 2 additions & 2 deletions
@@ -78,11 +78,11 @@ static __always_inline bool should_resched(int preempt_offset)
 			 tif_need_resched());
 }
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 extern asmlinkage void preempt_schedule(void);
 #define __preempt_schedule() preempt_schedule()
 extern asmlinkage void preempt_schedule_notrace(void);
 #define __preempt_schedule_notrace() preempt_schedule_notrace()
-#endif /* CONFIG_PREEMPT */
+#endif /* CONFIG_PREEMPTION */
 
 #endif /* __ASM_PREEMPT_H */

include/linux/cgroup.h

Lines changed: 1 addition & 0 deletions
@@ -150,6 +150,7 @@ struct task_struct *cgroup_taskset_first(struct cgroup_taskset *tset,
 struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset,
 					struct cgroup_subsys_state **dst_cssp);
 
+void cgroup_enable_task_cg_lists(void);
 void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
 			 struct css_task_iter *it);
 struct task_struct *css_task_iter_next(struct css_task_iter *it);

include/linux/cpuset.h

Lines changed: 9 additions & 4 deletions
@@ -40,21 +40,23 @@ static inline bool cpusets_enabled(void)
 
 static inline void cpuset_inc(void)
 {
-	static_branch_inc(&cpusets_pre_enable_key);
-	static_branch_inc(&cpusets_enabled_key);
+	static_branch_inc_cpuslocked(&cpusets_pre_enable_key);
+	static_branch_inc_cpuslocked(&cpusets_enabled_key);
 }
 
 static inline void cpuset_dec(void)
 {
-	static_branch_dec(&cpusets_enabled_key);
-	static_branch_dec(&cpusets_pre_enable_key);
+	static_branch_dec_cpuslocked(&cpusets_enabled_key);
+	static_branch_dec_cpuslocked(&cpusets_pre_enable_key);
 }
 
 extern int cpuset_init(void);
 extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
+extern void cpuset_read_lock(void);
+extern void cpuset_read_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -176,6 +178,9 @@ static inline void cpuset_update_active_cpus(void)
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
+static inline void cpuset_read_lock(void) { }
+static inline void cpuset_read_unlock(void) { }
+
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
 {
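These hooks are the read side of the cpuset_mutex-to-percpu_rwsem conversion called out in the merge summary: setscheduler() paths can take a cheap read lock around the deadline admission test while cpuset topology changes take the write side. A sketch of the intended pattern, where dl_admission_test() is a hypothetical stand-in for the real acceptance logic in kernel/sched/core.c:

    /*
     * Illustrative kernel-style fragment, not the actual call site: hold
     * the cpuset read lock across a deadline-bandwidth admission test so
     * a concurrent cpuset topology change cannot invalidate the result.
     */
    static int set_deadline_policy(struct task_struct *p,
                                   const struct sched_attr *attr)
    {
            int ret;

            cpuset_read_lock();     /* excludes cpuset topology changes */
            ret = dl_admission_test(p, attr);       /* hypothetical helper */
            cpuset_read_unlock();

            return ret;
    }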
