On 20/12/2018 07:55, Vincent Guittot wrote:
> When check_asym_packing() is triggered, the imbalance is set to:
> busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
> But busiest_stat.avg_load equals
> sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
> These divisions can generate a rounding that will make imbalance slightly
> lower than the weighted load of the cfs_rq.
> But this is enough to skip the rq in find_busiest_queue and prevents asym
> migration from happening.
>
> Directly set imbalance to sgs->group_load to remove the rounding.
                            ^^^^^^^^^^^^^^^

I see where that's coming from, but using 'sgs' here is a tad confusing
since 'sds->busiest_stat' is what's actually used.

Maybe just something like 'the busiest's sgs->group_load' would be good
enough to make things explicit.

>
> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
> ---
>  kernel/sched/fair.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ca46964..9b31247 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
>  	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
>  		return 0;
>
> -	env->imbalance = DIV_ROUND_CLOSEST(
> -		sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
> -		SCHED_CAPACITY_SCALE);
> +	env->imbalance = sds->busiest_stat.avg_load;

That should be group_load, not avg_load. With that fixed:

Reviewed-by: Valentin Schneider <valentin.schnei...@arm.com>

>
>  	return 1;
>  }
>
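For completeness, here is a minimal userspace sketch of the rounding the
changelog describes. The group_load/group_capacity values are made up for
illustration (not taken from a real trace), and DIV_ROUND_CLOSEST below is a
simplified stand-in for the kernel macro, valid for unsigned operands only:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL
/* Simplified stand-in for the kernel's DIV_ROUND_CLOSEST(), unsigned only */
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	unsigned long group_load = 999;		/* assumed example value */
	unsigned long group_capacity = 1000;	/* assumed example value */

	/* update_sg_lb_stats(): the first division truncates (1022.976 -> 1022) */
	unsigned long avg_load =
		group_load * SCHED_CAPACITY_SCALE / group_capacity;

	/* check_asym_packing() before the patch: converting back yields 998 */
	unsigned long imbalance =
		DIV_ROUND_CLOSEST(avg_load * group_capacity, SCHED_CAPACITY_SCALE);

	printf("group_load=%lu imbalance=%lu skipped=%s\n",
	       group_load, imbalance, group_load > imbalance ? "yes" : "no");

	return 0;
}

With those assumed values the old computation ends up at 998, just below the
group_load of 999, so the weighted load of the rq exceeds env->imbalance and
find_busiest_queue() skips it, which is the behaviour the patch fixes.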