Load balancing, which happens at the end of a "scheduler epoch", can trigger vcpu migration, which in turn may call runq_tickle(). If the cpu where this happens was idle, but we are now going to schedule a vcpu on it, let's update the runq's idle cpus mask accordingly _before_ doing load balancing.
Not doing that may cause runq_tickle() to think that the cpu is still idle, and tickle it to go pick up a vcpu from the runqueue, which would be wrong or, at least, suboptimal.

Signed-off-by: Dario Faggioli <dfaggi...@suse.com>
---
Cc: George Dunlap <george.dun...@citrix.com>
---
 xen/common/sched_credit2.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 2b16bcea21..72fed2dd18 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3554,6 +3554,13 @@ csched2_schedule(
         __set_bit(__CSFLAG_scheduled, &snext->flags);
     }

+    /* Clear the idle mask if necessary */
+    if ( cpumask_test_cpu(cpu, &rqd->idle) )
+    {
+        __cpumask_clear_cpu(cpu, &rqd->idle);
+        smt_idle_mask_clear(cpu, &rqd->smt_idle);
+    }
+
     /*
      * The reset condition is "has a scheduler epoch come to an end?".
      * The way this is enforced is checking whether the vcpu at the top
@@ -3574,13 +3581,6 @@ csched2_schedule(
         balance_load(ops, cpu, now);
     }

-    /* Clear the idle mask if necessary */
-    if ( cpumask_test_cpu(cpu, &rqd->idle) )
-    {
-        __cpumask_clear_cpu(cpu, &rqd->idle);
-        smt_idle_mask_clear(cpu, &rqd->smt_idle);
-    }
-
     snext->start_time = now;
     snext->tickled_cpu = -1;
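To make the ordering concrete, below is a minimal, self-contained C sketch of the idea. It is not the actual Xen code: the plain unsigned-long bitmask and the miniature runq_tickle(), balance_load() and schedule() are illustrative stand-ins for the scheduler's real cpumask-based structures; only the names and the before/after ordering follow the patch.

    /*
     * Simplified sketch of the ordering csched2_schedule() needs:
     * clear the current cpu from the idle mask *before* load
     * balancing, so that runq_tickle() no longer sees it as idle.
     */
    #include <stdio.h>

    #define NR_CPUS 8

    struct runqueue_data {
        unsigned long idle;    /* stand-in for the rqd->idle cpumask */
    };

    /* Tickle every cpu still marked idle, so it picks up a vcpu. */
    static void runq_tickle(struct runqueue_data *rqd)
    {
        for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
            if ( rqd->idle & (1UL << cpu) )
                printf("tickling idle cpu %u\n", cpu);
    }

    /* Load balancing may migrate a vcpu, which in turn tickles. */
    static void balance_load(struct runqueue_data *rqd)
    {
        runq_tickle(rqd);
    }

    static void schedule(struct runqueue_data *rqd, unsigned int cpu)
    {
        /* This cpu is about to run a vcpu: it is not idle any longer. */
        rqd->idle &= ~(1UL << cpu);

        /* Only now is it safe to balance load (and possibly tickle). */
        balance_load(rqd);
    }

    int main(void)
    {
        /* cpus 3 and 5 are idle; cpu 3 is about to schedule a vcpu. */
        struct runqueue_data rqd = { .idle = (1UL << 3) | (1UL << 5) };

        schedule(&rqd, 3);    /* only cpu 5 gets tickled */
        return 0;
    }

With the clear done before balance_load(), cpu 3 is no longer in the idle mask by the time runq_tickle() scans it, which is exactly the effect of moving the "Clear the idle mask" hunk earlier in csched2_schedule() above.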