On Fri, 20 Nov 2020 at 10:06, Mel Gorman <mgor...@techsingularity.net> wrote:
>
> In find_idlest_group(), the load imbalance is only relevant when the group
> is either overloaded or fully busy, but it is calculated unconditionally.
> This patch moves the imbalance calculation to the context where it is
> required. Technically it is a micro-optimisation, but the real benefit is
> that the next patch, which selects the imbalance based on the group_type,
> avoids confusing one type of imbalance with another.
>
> No functional change.
>
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>

Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>

> ---
>  kernel/sched/fair.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5fbed29e4001..9aded12aaa90 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8777,9 +8777,6 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>                         .group_type = group_overloaded,
>         };
>
> -       imbalance = scale_load_down(NICE_0_LOAD) *
> -                               (sd->imbalance_pct-100) / 100;
> -
>         do {
>                 int local_group;
>
> @@ -8833,6 +8830,11 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>         switch (local_sgs.group_type) {
>         case group_overloaded:
>         case group_fully_busy:
> +
> +               /* Calculate allowed imbalance based on load */
> +               imbalance = scale_load_down(NICE_0_LOAD) *
> +                               (sd->imbalance_pct-100) / 100;
> +
>                 /*
>                  * When comparing groups across NUMA domains, it's possible for
>                  * the local domain to be very lightly loaded relative to the
> --
> 2.26.2
>
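
For anyone wanting to sanity-check the threshold being moved, here is a
minimal standalone sketch of the same arithmetic. It assumes
scale_load_down(NICE_0_LOAD) == 1024 and sd->imbalance_pct == 125 (the
usual non-SMT default); both values depend on the kernel config and the
domain level, so treat this as an illustration only, not kernel code:

#include <stdio.h>

int main(void)
{
	/* Stand-ins for kernel values; not the real macros. */
	unsigned long nice_0_load = 1024;	/* scale_load_down(NICE_0_LOAD) */
	unsigned int imbalance_pct = 125;	/* common sched_domain default */

	/* The calculation the patch moves under group_overloaded/fully_busy. */
	unsigned long imbalance = nice_0_load * (imbalance_pct - 100) / 100;

	printf("allowed imbalance: %lu\n", imbalance);	/* prints 256 */
	return 0;
}

With the defaults assumed above this comes out to 1024 * 25 / 100 == 256,
i.e. a quarter of a nice-0 task's load.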
