On 12 May 2016 at 21:42, Yuyang Du <yuyang...@intel.com> wrote:
> On Thu, May 12, 2016 at 03:31:27AM -0700, tip-bot for Peter Zijlstra wrote:
>> Commit-ID:  1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Gitweb:     http://git.kernel.org/tip/1be0eb2a97d756fb7dd8c9baf372d81fa9699c09
>> Author:     Peter Zijlstra <pet...@infradead.org>
>> AuthorDate: Fri, 6 May 2016 12:21:23 +0200
>> Committer:  Ingo Molnar <mi...@kernel.org>
>> CommitDate: Thu, 12 May 2016 09:55:33 +0200
>>
>> sched/fair: Clean up scale confusion
>>
>> Wanpeng noted that the scale_load_down() in calculate_imbalance() was
>> weird. I agree, it should be SCHED_CAPACITY_SCALE, since we're going
>> to compare against busiest->group_capacity, which is in [capacity]
>> units.
In fact, load_above_capacity is only about load, not about capacity.

    load_above_capacity -= busiest->group_capacity;

is an optimization (possibly a wrong one) of:

    load_above_capacity -= busiest->group_capacity * SCHED_LOAD_SCALE / SCHED_CAPACITY_SCALE;

so we subtract load from load.

>>
>> Reported-by: Wanpeng Li <wanpeng...@hotmail.com>
>> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
>> Cc: Linus Torvalds <torva...@linux-foundation.org>
>> Cc: Mike Galbraith <efa...@gmx.de>
>> Cc: Morten Rasmussen <morten.rasmus...@arm.com>
>> Cc: Peter Zijlstra <pet...@infradead.org>
>> Cc: Thomas Gleixner <t...@linutronix.de>
>> Cc: Yuyang Du <yuyang...@intel.com>
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Ingo Molnar <mi...@kernel.org>
>
> It is good that this issue is addressed and the patch merged; however, for the
> record, Vincent already had a solution for this, and we had a patch,
> including other cleanups (the latest version is:
> https://lkml.org/lkml/2016/5/3/925).
> And I think Ben first pointed this out (and we then attempted to address it),
> as far as I can tell.