On 19/09/16 14:40, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
I haven't tested it in KVM guests with a libvirt environment. This message makes sense for asymmetric compute capacities (ARM big.LITTLE), i.e. for a setup where cpu_capacity = 1024 (a logical cpu w/o SMT) can't be assumed for the big cpus. It also tells you that you're running in an SMT environment (2 hw threads, hence 589), but this is probably less important. Guarding it with sched_debug_enabled makes sense for this.

> ---
>  kernel/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
>  	}
>  	rcu_read_unlock();
>
> -	if (rq) {
> +	if (rq && sched_debug_enabled) {
>  		pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
>  			cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
>  	}
>