On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calde...@linux.intel.com> wrote:
>
> Create a separate function, sched_asym(). A subsequent changeset will
> introduce logic to deal with SMT in conjunction with asymmetric
> packing. Such logic will need the statistics of the scheduling
> group to be provided as an argument. Update them before calling
> sched_asym().
>
> Cc: Aubrey Li <aubrey...@intel.com>
> Cc: Ben Segall <bseg...@google.com>
> Cc: Daniel Bristot de Oliveira <bris...@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Quentin Perret <qper...@google.com>
> Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruv...@linux.intel.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Tim Chen <tim.c.c...@linux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <j...@joelfernandes.org>
> Reviewed-by: Len Brown <len.br...@intel.com>
> Co-developed-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calde...@linux.intel.com>
Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>

> ---
> Changes since v4:
>  * None
>
> Changes since v3:
>  * Remove a redundant check for the local group in sched_asym().
>    (Dietmar)
>  * Reworded commit message for clarity. (Len)
>
> Changes since v2:
>  * Introduced this patch.
>
> Changes since v1:
>  * N/A
> ---
>  kernel/sched/fair.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c5851260b4d8..26db017c14a3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8597,6 +8597,13 @@ group_type group_classify(unsigned int imbalance_pct,
>  	return group_has_spare;
>  }
>
> +static inline bool
> +sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
> +	   struct sched_group *group)
> +{
> +	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
> +}
> +
>  /**
>   * update_sg_lb_stats - Update sched_group's statistics for load balancing.
>   * @env: The load balancing environment.
> @@ -8657,18 +8664,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  		}
>  	}
>
> +	sgs->group_capacity = group->sgc->capacity;
> +
> +	sgs->group_weight = group->group_weight;
> +
>  	/* Check if dst CPU is idle and preferred to this group */
>  	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
> -	    env->idle != CPU_NOT_IDLE &&
> -	    sgs->sum_h_nr_running &&
> -	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> +	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
> +	    sched_asym(env, sds, sgs, group)) {
>  		sgs->group_asym_packing = 1;
>  	}
>
> -	sgs->group_capacity = group->sgc->capacity;
> -
> -	sgs->group_weight = group->group_weight;
> -
>  	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
>
>  	/* Computing avg_load makes sense only when group is overloaded */
> --
> 2.17.1
>
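
A side note for readers following the thread: this patch is purely a
refactor. At this point sched_asym() ignores its sds and sgs arguments;
they exist so the promised SMT-aware changeset can use them. Below is a
minimal userspace sketch of how such logic might consume the group
statistics (the stub types, the priority rule, and the SMT condition are
all invented for illustration; this is not the actual follow-up patch):

	#include <stdbool.h>

	/* Stub types standing in for the kernel's real structures. */
	struct lb_env { int dst_cpu; };
	struct sd_lb_stats { int unused; };
	struct sg_lb_stats { unsigned int sum_h_nr_running; };
	struct sched_group { int asym_prefer_cpu; unsigned int group_weight; };

	/* Stand-in for the kernel's asym-priority comparison. */
	static bool sched_asym_prefer(int a, int b)
	{
		return a < b;	/* pretend a lower CPU id means higher priority */
	}

	static bool
	sched_asym(struct lb_env *env, struct sd_lb_stats *sds,
		   struct sg_lb_stats *sgs, struct sched_group *group)
	{
		(void)sds;	/* reserved for wider state in later patches */

		/*
		 * Illustrative use of the new arguments (invented condition):
		 * decline to pull from a group whose CPUs are all busy, since
		 * asym packing gains little when no sibling would be freed.
		 */
		if (sgs->sum_h_nr_running >= group->group_weight)
			return false;

		return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
	}

	int main(void)
	{
		struct lb_env env = { .dst_cpu = 0 };
		struct sd_lb_stats sds = { 0 };
		struct sg_lb_stats sgs = { .sum_h_nr_running = 1 };
		struct sched_group group = { .asym_prefer_cpu = 2, .group_weight = 2 };

		return sched_asym(&env, &sds, &sgs, &group) ? 0 : 1;
	}

The point of the refactor is exactly that such experiments stay confined
to sched_asym(): update_sg_lb_stats() already passes everything the
helper could need, and the fields a future version might test
(group_capacity, group_weight) are now updated before the call rather
than after it.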