Hi Patrick,

On Tue, Jan 23, 2018 at 06:08:46PM +0000, Patrick Bellasi wrote:
>  static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
>  {
> -     unsigned long util, capacity;
> +     long util, util_est;
>  
>       /* Task has no contribution or is new */
>       if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
> -             return cpu_util(cpu);
> +             return cpu_util_est(cpu);
>  
> -     capacity = capacity_orig_of(cpu);
> -     util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> +     /* Discount task's blocked util from CPU's util */
> +     util = cpu_util(cpu) - task_util(p);
> +     util = max(util, 0L);
>  
> -     return (util >= capacity) ? capacity : util;
> +     if (!sched_feat(UTIL_EST))
> +             return util;

At first, it is not clear to me why you are no longer clamping the
utilization to the CPU's original capacity. It looks like the clamping is
not needed any more after commit f453ae2200b0 ("sched/fair: Consider
RT/IRQ pressure in capacity_spare_wake()"). Maybe a separate patch to
remove the clamping part?

Thanks,
Pavan
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.
