On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calde...@linux.intel.com> wrote:
>
> There exist situations in which the load balance needs to know the
> properties of the CPUs in a scheduling group. When using asymmetric
> packing, for instance, the load balancer needs to know not only the
> state of dst_cpu but also of its SMT siblings, if any.
>
> Use the flags of the child scheduling domains to initialize scheduling
> group flags. This will reflect the properties of the CPUs in the
> group.
>
> A subsequent changeset will make use of these new flags. No functional
> changes are introduced.
>
> Cc: Aubrey Li <aubrey...@intel.com>
> Cc: Ben Segall <bseg...@google.com>
> Cc: Daniel Bristot de Oliveira <bris...@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Quentin Perret <qper...@google.com>
> Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruv...@linux.intel.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Tim Chen <tim.c.c...@linux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <j...@joelfernandes.org>
> Reviewed-by: Len Brown <len.br...@intel.com>
> Originally-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calde...@linux.intel.com>
Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>

> ---
> Changes since v4:
>  * None
>
> Changes since v3:
>  * Clear the flags of the scheduling groups of a domain if its child is
>    destroyed.
>  * Minor rewording of the commit message.
>
> Changes since v2:
>  * Introduced this patch.
>
> Changes since v1:
>  * N/A
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 21 ++++++++++++++++++---
>  2 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d3e5793e117..86ab33ce529d 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1809,6 +1809,7 @@ struct sched_group {
>          unsigned int            group_weight;
>          struct sched_group_capacity *sgc;
>          int                     asym_prefer_cpu;        /* CPU of highest priority in group */
> +        int                     flags;
>
>          /*
>           * The CPUs this group covers.
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 4e8698e62f07..c56faae461d9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -716,8 +716,20 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
>                  tmp = sd;
>                  sd = sd->parent;
>                  destroy_sched_domain(tmp);
> -                if (sd)
> +                if (sd) {
> +                        struct sched_group *sg = sd->groups;
> +
> +                        /*
> +                         * sched groups hold the flags of the child sched
> +                         * domain for convenience. Clear such flags since
> +                         * the child is being destroyed.
> +                         */
> +                        do {
> +                                sg->flags = 0;
> +                        } while (sg != sd->groups);
> +
>                          sd->child = NULL;
> +                }
>          }
>
>          for (tmp = sd; tmp; tmp = tmp->parent)
> @@ -916,10 +928,12 @@ build_group_from_child_sched_domain(struct sched_domain *sd, int cpu)
>                  return NULL;
>
>          sg_span = sched_group_span(sg);
> -        if (sd->child)
> +        if (sd->child) {
>                  cpumask_copy(sg_span, sched_domain_span(sd->child));
> -        else
> +                sg->flags = sd->child->flags;
> +        } else {
>                  cpumask_copy(sg_span, sched_domain_span(sd));
> +        }
>
>          atomic_inc(&sg->ref);
>          return sg;
> @@ -1169,6 +1183,7 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
>          if (child) {
>                  cpumask_copy(sched_group_span(sg), sched_domain_span(child));
>                  cpumask_copy(group_balance_mask(sg), sched_group_span(sg));
> +                sg->flags = child->flags;
>          } else {
>                  cpumask_set_cpu(cpu, sched_group_span(sg));
>                  cpumask_set_cpu(cpu, group_balance_mask(sg));
> --
> 2.17.1
>
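
As a side note for anyone following the series: the consumer of these group
flags only arrives in a later patch. A minimal sketch of the kind of check
the new field enables is below; the helper name is purely illustrative and
not part of this patch, the only assumption being that SD_SHARE_CPUCAPACITY
is the flag set on SMT-level child domains.

/*
 * Illustrative sketch only, not part of this patch: with sg->flags seeded
 * from the child domain, a load-balancing path can tell whether a group
 * spans SMT siblings, since SD_SHARE_CPUCAPACITY is set on SMT-level
 * domains and propagates into the group flags. Helper name is hypothetical.
 */
static inline bool sched_group_has_smt(struct sched_group *sg)
{
	return sg->flags & SD_SHARE_CPUCAPACITY;
}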