On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calde...@linux.intel.com> wrote:
>
> Before deciding to pull tasks when using asymmetric packing of tasks,
> on some architectures (e.g., x86) it is necessary to know not only the
> state of dst_cpu but also that of its SMT siblings. The decision to
> classify a candidate busiest group as group_asym_packing is made in
> update_sg_lb_stats(). Give this function access to the scheduling
> domain statistics, which contain the statistics of the local group.
>
> Cc: Aubrey Li <aubrey...@intel.com>
> Cc: Ben Segall <bseg...@google.com>
> Cc: Daniel Bristot de Oliveira <bris...@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Quentin Perret <qper...@google.com>
> Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruv...@linux.intel.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Tim Chen <tim.c.c...@linux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <j...@joelfernandes.org>
> Reviewed-by: Len Brown <len.br...@intel.com>
> Originally-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calde...@linux.intel.com>

Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>
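
For readers following along, the effect of the change can be sketched
as follows (a minimal illustration, not the kernel code verbatim;
sds->local is the local group that update_sd_lb_stats() records before
it calls update_sg_lb_stats()):

	/* Old check: scan the group's span for dst_cpu. */
	local_group = cpumask_test_cpu(env->dst_cpu,
				       sched_group_span(group));

	/*
	 * New check: update_sd_lb_stats() has already cached the local
	 * group in sds->local, so a pointer comparison suffices. The
	 * real gain is that sds, and with it the local group's
	 * statistics, is now visible inside update_sg_lb_stats() for
	 * the group_asym_packing classification.
	 */
	local_group = (group == sds->local);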

> ---
> Changes since v4:
>   * None
>
> Changes since v3:
>   * None
>
> Changes since v2:
>   * Introduced this patch.
>
> Changes since v1:
>   * N/A
> ---
>  kernel/sched/fair.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7a054f528bcc..c5851260b4d8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8605,6 +8605,7 @@ group_type group_classify(unsigned int imbalance_pct,
>   * @sg_status: Holds flag indicating the status of the sched_group
>   */
>  static inline void update_sg_lb_stats(struct lb_env *env,
> +                                     struct sd_lb_stats *sds,
>                                       struct sched_group *group,
>                                       struct sg_lb_stats *sgs,
>                                       int *sg_status)
> @@ -8613,7 +8614,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>
>         memset(sgs, 0, sizeof(*sgs));
>
> -       local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
> +       local_group = group == sds->local;
>
>         for_each_cpu_and(i, sched_group_span(group), env->cpus) {
>                 struct rq *rq = cpu_rq(i);
> @@ -9176,7 +9177,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>                                 update_group_capacity(env->sd, env->dst_cpu);
>                 }
>
> -               update_sg_lb_stats(env, sg, sgs, &sg_status);
> +               update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
>
>                 if (local_group)
>                         goto next_group;
> --
> 2.17.1
>