because it is cheaper, and there is not much point in randomizing which CPU gets selected anyway, as such a choice will be overridden shortly after, in runq_tickle().
If we really feel the need (e.g., if we prove it worthwhile through benchmarking), we can record the last CPU used by csched2_cpu_pick() and migrate() in a per-runqueue variable, and then use cpumask_cycle()... but this really does not look necessary.

Signed-off-by: Dario Faggioli <dario.faggi...@citrix.com>
---
Cc: George Dunlap <george.dun...@citrix.com>
Cc: Anshul Makkar <anshul.mak...@citrix.com>
Cc: David Vrabel <david.vra...@citrix.com>
---
 xen/common/sched_credit2.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index a8b3a85..afd432e 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1545,7 +1545,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
     {
         cpumask_and(cpumask_scratch, vc->cpu_hard_affinity,
                     &svc->migrate_rqd->active);
-        new_cpu = cpumask_any(cpumask_scratch);
+        new_cpu = cpumask_first(cpumask_scratch);
         if ( new_cpu < nr_cpu_ids )
             goto out_up;
     }
@@ -1604,7 +1604,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)

     cpumask_and(cpumask_scratch, vc->cpu_hard_affinity,
                 &prv->rqd[min_rqi].active);
-    new_cpu = cpumask_any(cpumask_scratch);
+    new_cpu = cpumask_first(cpumask_scratch);
     BUG_ON(new_cpu >= nr_cpu_ids);

  out_up:
@@ -1718,7 +1718,7 @@ static void migrate(const struct scheduler *ops,
         cpumask_and(cpumask_scratch, svc->vcpu->cpu_hard_affinity,
                     &trqd->active);
-        svc->vcpu->processor = cpumask_any(cpumask_scratch);
+        svc->vcpu->processor = cpumask_first(cpumask_scratch);
         ASSERT(svc->vcpu->processor < nr_cpu_ids);
         __runq_assign(svc, trqd);