
Commit 1bd77f2

Christoph Lameter authored and Linus Torvalds committed
[PATCH] sched: call tasklet less frequently
Trigger softirq less frequently.

Before this patch we trigger the softirq using an offset of sd->interval.
However, if the queue is busy then it is sufficient to schedule the softirq
with sd->interval * busy_factor.

So we modify the calculation of the next time to balance by taking the
interval added to last_balance again. This is only the right value if the
idle/busy situation continues as is.

There are two potential trouble spots:

- If the queue was idle and now gets busy then we call rebalance early.
  However, that is not a problem because we will then use the longer
  interval for the next period.

- If the queue was busy and becomes idle then we potentially wait too long
  before rebalancing. However, when the task goes idle then idle_balance
  is called. We add another calculation of the next balance time based on
  sd->interval in idle_balance so that we will rebalance soon.

V2->V3:
- Calculate the rebalance time based on the current jiffies and not based
  on the jiffies at the last time we load balanced. We no longer rely on
  staggering and therefore we can afford to do this now.

V3->V4:
- Use functions to do jiffy comparisons.

Signed-off-by: Christoph Lameter <[email protected]>
Cc: Peter Williams <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: "Siddha, Suresh B" <[email protected]>
Cc: "Chen, Kenneth W" <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent c9819f4 commit 1bd77f2

1 file changed: +16 -2 lines changed

kernel/sched.c

Lines changed: 16 additions & 2 deletions
@@ -2774,14 +2774,28 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd)
 static void idle_balance(int this_cpu, struct rq *this_rq)
 {
 	struct sched_domain *sd;
+	int pulled_task = 0;
+	unsigned long next_balance = jiffies + 60 * HZ;

 	for_each_domain(this_cpu, sd) {
 		if (sd->flags & SD_BALANCE_NEWIDLE) {
 			/* If we've pulled tasks over stop searching: */
-			if (load_balance_newidle(this_cpu, this_rq, sd))
+			pulled_task = load_balance_newidle(this_cpu,
+								this_rq, sd);
+			if (time_after(next_balance,
+				  sd->last_balance + sd->balance_interval))
+				next_balance = sd->last_balance
+						+ sd->balance_interval;
+			if (pulled_task)
 				break;
 		}
 	}
+	if (!pulled_task)
+		/*
+		 * We are going idle. next_balance may be set based on
+		 * a busy processor. So reset next_balance.
+		 */
+		this_rq->next_balance = next_balance;
 }

 /*
@@ -2904,7 +2918,7 @@ static void run_rebalance_domains(struct softirq_action *h)
 			 */
 			idle = NOT_IDLE;
 		}
-		sd->last_balance += interval;
+		sd->last_balance = jiffies;
 	}
 	if (time_after(next_balance, sd->last_balance + interval))
 		next_balance = sd->last_balance + interval;
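
To make the interval arithmetic described in the commit message easier to follow, here is a small standalone C sketch of how the next balance time is now derived from the current jiffies rather than from the previously scheduled slot. This is illustrative only and not the kernel code: the struct, the simplified time_after macro and the fake jiffies counter are assumptions made so the example can run in userspace; only the field names balance_interval, busy_factor and last_balance mirror struct sched_domain.

/*
 * Illustrative sketch only -- not kernel code.  It mimics the
 * rebalance-interval logic: while the queue is busy the interval is
 * stretched by busy_factor, and last_balance is set to the current
 * jiffies so the next deadline is measured from "now".
 */
#include <stdio.h>

#define HZ 1000
static unsigned long jiffies;		/* stand-in for the kernel tick counter */

/* Wrap-safe "is a later than b", in the spirit of the kernel's time_after() */
#define time_after(a, b)	((long)((b) - (a)) < 0)

struct sched_domain_sketch {
	unsigned long balance_interval;	/* base interval in jiffies */
	unsigned int busy_factor;	/* stretch factor while busy */
	unsigned long last_balance;	/* jiffies of the last balance */
};

/* Compute when this domain next wants to be balanced. */
static unsigned long next_balance_time(struct sched_domain_sketch *sd, int idle)
{
	unsigned long interval = sd->balance_interval;

	if (!idle)			/* busy queue: balance less often */
		interval *= sd->busy_factor;

	sd->last_balance = jiffies;	/* measure from "now", as in the patch */
	return sd->last_balance + interval;
}

int main(void)
{
	struct sched_domain_sketch sd = {
		.balance_interval = 8, .busy_factor = 32, .last_balance = 0,
	};
	unsigned long busy_deadline, idle_deadline;

	jiffies = 5 * HZ;
	busy_deadline = next_balance_time(&sd, 0);	/* 5000 + 8 * 32 */
	idle_deadline = next_balance_time(&sd, 1);	/* 5000 + 8      */

	printf("busy: rebalance at jiffy %lu, idle: at jiffy %lu\n",
	       busy_deadline, idle_deadline);
	printf("idle deadline earlier than busy one: %d\n",
	       time_after(busy_deadline, idle_deadline));
	return 0;
}

This also mirrors why idle_balance() resets rq->next_balance in the hunk above: a deadline computed while the queue was busy can lie far in the future, so a queue that goes idle recomputes a shorter deadline from sd->balance_interval.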
