We use task_util() in find_idlest_group() via capacity_spare_wake().
This task_util is updated in wake_cap(). However, wake_cap() is not the
only reason for ending up in find_idlest_group() - we could have been
sent there by wake_wide(). So explicitly sync the task util with
prev_cpu when we are about to head to find_idlest_group().
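
As a rough, self-contained sketch of why the spare-capacity comparison
needs an up-to-date task util (this is not kernel code; all names and
numbers below are invented for illustration, only loosely modelled on
capacity_spare_wake()/find_idlest_group()):

/*
 * Standalone model: spare capacity on a CPU if the waking task lands
 * there. A stale (undecayed) util can flip the comparison that decides
 * where the task goes; syncing the util to prev_cpu's clock first means
 * the decision is made on current data.
 */
#include <stdio.h>

struct cpu_model {
	long capacity;		/* available CPU capacity (model value) */
};

static long spare_capacity(const struct cpu_model *cpu, long task_util)
{
	long spare = cpu->capacity - task_util;

	return spare > 0 ? spare : 0;
}

int main(void)
{
	struct cpu_model big    = { .capacity = 1024 };
	struct cpu_model little = { .capacity = 446 };

	long stale_util  = 600;	/* util as last recorded, long ago */
	long synced_util = 120;	/* util after decaying up to "now" */

	printf("stale:  big spare=%ld little spare=%ld\n",
	       spare_capacity(&big, stale_util),
	       spare_capacity(&little, stale_util));
	printf("synced: big spare=%ld little spare=%ld\n",
	       spare_capacity(&big, synced_util),
	       spare_capacity(&little, synced_util));
	return 0;
}
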
We could simply do this at the beginning of select_task_rq_fair() (i.e.
irrespective of whether we're heading to select_idle_sibling() or
find_idlest_group() & co), but I didn't want to slow down the
select_idle_sibling() path more than necessary.

Don't do this during fork balancing: we won't need the task_util, and
we'd just clobber the last_update_time, which is supposed to be 0.

Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c95880e216f6..62869ff252b4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5913,6 +5913,14 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 		new_cpu = cpu;
 	}
 
+	if (sd && !(sd_flag & SD_BALANCE_FORK))
+		/*
+		 * We're going to need the task's util for capacity_spare_wake
+		 * in find_idlest_group. Sync it up to prev_cpu's
+		 * last_update_time.
+		 */
+		sync_entity_load_avg(&p->se);
+
 	if (!sd) {
 pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
-- 
2.13.0
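
For the fork-balancing exclusion, a minimal standalone model (again with
invented names, not the fair.c implementation) of why a sync on the fork
path would be harmful: a freshly forked entity carries
last_update_time == 0 as a "never attached yet" marker, and syncing
would overwrite that sentinel.

/*
 * Standalone model of skipping the sync for fork balancing.
 * Invented names; not kernel code.
 */
#include <stdio.h>

#define MODEL_BALANCE_WAKE	0x1
#define MODEL_BALANCE_FORK	0x2

struct entity_model {
	unsigned long long last_update_time;	/* 0 means "never attached yet" */
};

static void model_sync(struct entity_model *se, unsigned long long now)
{
	/* the real code would also decay the entity's util up to @now */
	se->last_update_time = now;
}

static void maybe_sync(struct entity_model *se, int sd_flag,
		       unsigned long long now)
{
	/* mirror the patch: never sync on the fork path */
	if (!(sd_flag & MODEL_BALANCE_FORK))
		model_sync(se, now);
}

int main(void)
{
	struct entity_model fresh = { .last_update_time = 0 };

	maybe_sync(&fresh, MODEL_BALANCE_FORK, 123456ULL);
	printf("fork-path wakeup: last_update_time=%llu (0 sentinel preserved)\n",
	       fresh.last_update_time);

	maybe_sync(&fresh, MODEL_BALANCE_WAKE, 123456ULL);
	printf("normal wakeup:    last_update_time=%llu\n",
	       fresh.last_update_time);
	return 0;
}
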