On Tue, 16 Jun 2020 at 08:05, Peng Wang <rock...@linux.alibaba.com> wrote:
>
> While looking at enqueue_task_fair and dequeue_task_fair, it occurred
> to me that dequeue_task_fair can also be optimized as Vincent described
> in commit 7d148be69e3a ("sched/fair: Optimize enqueue_task_fair()").
>
> When a throttled cfs_rq is encountered, the jump to the dequeue_throttle
> label guarantees that se is not NULL, and rq->nr_running remains
> unchanged, so we can also skip the early balance check in that case.
>
> Signed-off-by: Peng Wang <rock...@linux.alibaba.com>

Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>

> ---
>  kernel/sched/fair.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index cbcb2f7..05242b7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5614,14 +5614,14 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>
>         }
>
> -dequeue_throttle:
> -       if (!se)
> -               sub_nr_running(rq, 1);
> +       /* At this point se is NULL and we are at root level */
> +       sub_nr_running(rq, 1);
>
>         /* balance early to pull high priority tasks */
>         if (unlikely(!was_sched_idle && sched_idle_rq(rq)))
>                 rq->next_balance = jiffies;
>
> +dequeue_throttle:
>         util_est_dequeue(&rq->cfs, p, task_sleep);
>         hrtick_update(rq);
>  }
> --
> 2.9.5
>
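
To illustrate the invariant the changelog relies on (se is NULL only when the
walk up the hierarchy completes, so rq->nr_running only changes on that path
and the early balance check is pointless after a throttled bail-out), here is
a minimal standalone C sketch. It is not the kernel function; all names and
types below are made up for illustration only.

/*
 * Standalone sketch of the control-flow pattern in dequeue_task_fair()
 * after the patch: the loop either finishes with the cursor NULL (normal
 * dequeue, accounting and balance check run) or bails out early via goto
 * with the cursor still non-NULL (throttled case, counters unchanged).
 */
#include <stdio.h>
#include <stdbool.h>

struct entity {
        struct entity *parent;
        bool throttled;
};

static unsigned int nr_running = 1;

static void dequeue_sketch(struct entity *se)
{
        for (; se; se = se->parent) {
                if (se->throttled)
                        goto dequeue_throttle;  /* se != NULL here */
                /* per-level dequeue work would go here */
        }

        /* Only reached with se == NULL: nr_running really changed. */
        nr_running--;
        printf("dequeued, nr_running=%u, balance check may run\n", nr_running);

dequeue_throttle:
        /* Common tail, runs on both paths (util_est/hrtick in the kernel). */
        printf("common tail, se %s NULL\n", se ? "!=" : "==");
}

int main(void)
{
        struct entity root  = { .parent = NULL,  .throttled = false };
        struct entity child = { .parent = &root, .throttled = false };

        dequeue_sketch(&child); /* normal path: accounting runs */

        root.throttled = true;
        nr_running = 1;
        dequeue_sketch(&child); /* throttled path: common tail only */
        return 0;
}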
