While load balancing a target rq, we look for the busiest group. This operation may require an up-to-date rq clock if we end up calling scale_rt_power(). To that end, update the clock manually if the target is running tickless.
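For reference, here is a simplified sketch of why the rq clock matters here, paraphrased from scale_rt_power() in kernel/sched/fair.c of this tree (abbreviated, not the verbatim upstream function):

	/*
	 * Sketch: scale_rt_power() sizes the non-RT share of cpu power
	 * from a wall-time window ending at rq->clock. If the CPU has
	 * been tickless, rq->clock stopped advancing, the window
	 * shrinks, and the RT average eats a larger fraction of it
	 * than it should.
	 */
	static unsigned long scale_rt_power(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		u64 total, available;

		/* window = one averaging period + time since last aging */
		total = sched_avg_period() + (rq->clock - rq->age_stamp);

		if (unlikely(total < rq->rt_avg))
			available = 0;	/* clamp: power must not go negative */
		else
			available = total - rq->rt_avg;	/* non-RT time */

		if (unlikely((s64)total < SCHED_POWER_SCALE))
			total = SCHED_POWER_SCALE;
		total >>= SCHED_POWER_SHIFT;

		return div_u64(available, total);
	}

A stale rq->clock makes "total" too small and so inflates the apparent RT pressure, which is why the patch below refreshes the clock before find_busiest_group() runs.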
DOUBT: don't we actually also need this in the vanilla kernel, in case
this_cpu is in dyntick-idle mode?

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Alessio Igor Bogani <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Geoff Levand <[email protected]>
Cc: Gilad Ben Yossef <[email protected]>
Cc: Hakan Akkan <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Li Zhong <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
---
 kernel/sched/fair.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 698137d..473f50f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5023,6 +5023,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_inc(sd, lb_count[idle]);
 
+	/*
+	 * find_busiest_group() may need an up-to-date cpu clock
+	 * (see scale_rt_power()). If the CPU is running
+	 * tickless, its clock may be stale.
+	 */
+	if (tick_nohz_full_cpu(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
 	group = find_busiest_group(&env, balance);
-- 
1.7.5.4

