Hi Quentin,

On 12/11/20 11:12, Quentin Perret wrote:
> enqueue_task_fair() attempts to skip the overutilized update for new
> tasks as their util_avg is not accurate yet. However, the flag we check
> to do so is overwritten earlier on in the function, which makes the
> condition pretty much a nop.
>
> Fix this by saving the flag early on.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
> Reported-by: Rick Yiu <rick...@google.com>
> Signed-off-by: Quentin Perret <qper...@google.com>

Reviewed-by: Valentin Schneider <valentin.schnei...@arm.com>

Alternatively: how much does skipping the overutilized update here actually help
us? The next tick will unconditionally update it, which on arm64 happens
anywhere in the next ]0, 4]ms. That "fake" fork-time util_avg should already
be accounted for in the rq util_avg, and even if the new task was running the
entire time, 4ms doesn't buy us much decay.
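To put a rough number on that last point: with the standard PELT half-life of
32ms, util decays by a factor of 0.5^(1/32) per ms, so even a worst-case 4ms
tick delay leaves ~92% of the signal. A standalone sketch of that arithmetic
(plain userspace C, not kernel code; the 32ms half-life and the 4ms tick, i.e.
HZ=250, are assumptions on my side, not taken from the patch):

#include <math.h>
#include <stdio.h>

int main(void)
{
        /* Assumed: standard PELT half-life, util halves every 32ms. */
        double decay_per_ms = pow(0.5, 1.0 / 32.0);

        /* Assumed worst-case tick delay on arm64 with HZ=250: 4ms. */
        double remaining = pow(decay_per_ms, 4.0);

        /* Prints ~0.917, i.e. barely 8% of decay over a full tick. */
        printf("util remaining after 4ms: %.3f\n", remaining);

        return 0;
}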

> ---
>  kernel/sched/fair.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..f3ee60b92718 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>       struct cfs_rq *cfs_rq;
>       struct sched_entity *se = &p->se;
>       int idle_h_nr_running = task_has_idle_policy(p);
> +     int task_new = !(flags & ENQUEUE_WAKEUP);
>
>       /*
>        * The code below (indirectly) updates schedutil which looks at
> @@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>        * into account, but that is not straightforward to implement,
>        * and the following generally works well enough in practice.
>        */
> -     if (flags & ENQUEUE_WAKEUP)
> +     if (!task_new)
>               update_overutilized_status(rq);
>
>  enqueue_throttle:
