On 19/12/2018 15:20, Vincent Guittot wrote:
[...]
>> Oh yes, I never said it didn't work - I was doing some investigation on
>> the reason as to why we'd need this fix, because it wasn't explicit from
>> the commit message.
>>
>> The rounding errors are countered by the +1, yes, but I'd rather remove
>> the errors altogether and go for the snippet I suggested in my previous
>> reply.
>
> except that you don't always want to migrate all group load.
> I prefer keeping current algorithm and fix it for now. Trying "new"
> thing can come in a 2nd step
We already set the imbalance as the whole group load, we just introduce
rounding errors in between. As I've already said, in update_sg_lb_stats()
we do:

  sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) / sgs->group_capacity;

and in check_asym_packing() we do:

  env->imbalance = DIV_ROUND_CLOSEST(
	  sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
	  SCHED_CAPACITY_SCALE);

So we end up with something like:

              group_load * SCHED_CAPACITY_SCALE * group_capacity
  imbalance = --------------------------------------------------
                      group_capacity * SCHED_CAPACITY_SCALE

which we could reduce down to:

  imbalance = group_load

and not get any rounding errors.
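
To make the loss concrete, here's a quick standalone userspace sketch (not
kernel code; SCHED_CAPACITY_SCALE and DIV_ROUND_CLOSEST are redefined
locally, and the group_load/group_capacity values are made up) that replays
the two divisions:

  /* Standalone demo of the rounding loss described above (not kernel code). */
  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL
  /* same rounding as the kernel's DIV_ROUND_CLOSEST() for unsigned operands */
  #define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

  int main(void)
  {
          /* made-up example values for a busiest group */
          unsigned long group_load = 300, group_capacity = 700;

          /* update_sg_lb_stats(): truncating division loses the fraction */
          unsigned long avg_load =
                  group_load * SCHED_CAPACITY_SCALE / group_capacity;

          /* check_asym_packing(): scaling back does not recover group_load */
          unsigned long imbalance =
                  DIV_ROUND_CLOSEST(avg_load * group_capacity,
                                    SCHED_CAPACITY_SCALE);

          printf("group_load=%lu imbalance=%lu\n", group_load, imbalance);
          return 0;
  }

With these (made-up) numbers the scaled-back imbalance comes out at 299
against a group_load of 300 - exactly the kind of shortfall the +1 is there
to paper over, and which using group_load directly would avoid entirely.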