
Commit 8e1ac42

Quentin Perret authored and Peter Zijlstra committed
sched/fair: Fix overutilized update in enqueue_task_fair()
enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop. Fix this by saving the flag early on.

Fixes: 2802bf3 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <[email protected]>
Signed-off-by: Quentin Perret <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 8d4d9c7 commit 8e1ac42

File tree: 1 file changed (+2, -1)

kernel/sched/fair.c

Lines changed: 2 additions & 1 deletion
@@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
+	int task_new = !(flags & ENQUEUE_WAKEUP);
 
 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * into account, but that is not straightforward to implement,
 	 * and the following generally works well enough in practice.
 	 */
-	if (flags & ENQUEUE_WAKEUP)
+	if (!task_new)
 		update_overutilized_status(rq);
 
 enqueue_throttle:
