On 24.09.19 12:13, Jan Beulich wrote:
> On 24.09.2019 12:06, Jürgen Groß wrote:
>> On 24.09.19 11:46, Jan Beulich wrote:
>>> On 14.09.2019 10:52, Juergen Gross wrote:
>>>> @@ -366,18 +380,38 @@ static void sched_free_unit(struct sched_unit *unit)
>>>>      xfree(unit);
>>>>  }
>>>> +static void sched_unit_add_vcpu(struct sched_unit *unit, struct vcpu *v)
>>>> +{
>>>> +    v->sched_unit = unit;
>>>> +    if ( !unit->vcpu_list || unit->vcpu_list->vcpu_id > v->vcpu_id )
>>> Is the right side needed? Aren't vCPU-s created in increasing order
>>> of their IDs, and aren't we relying on this elsewhere too?
>> Idle vcpus are rather special and they require the second test.
> How about a code comment to this effect?
Okay.
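Something along these lines perhaps (just a sketch, the exact wording
can still change):

    /*
     * Vcpus are normally added in order of increasing vcpu_id, but idle
     * vcpus can show up out of order, so the explicit id comparison is
     * needed to keep vcpu_list pointing at the lowest-numbered vcpu.
     */
    if ( !unit->vcpu_list || unit->vcpu_list->vcpu_id > v->vcpu_id )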
>>>> +    {
>>>> +        unit->vcpu_list = v;
>>>> +        unit->unit_id = v->vcpu_id;
>>> This makes for a pretty strange set of IDs (non-successive), and
>>> explains why patch 24 uses a local "unit_idx" instead of switching
>>> from v->vcpu_id as array index to unit->unit_id. Is there a reason
>>> you don't divide by the granularity here, eliminating the division
>>> done e.g. ...
>> Cpus not in a cpupool are in single-vcpu units, so in order not to end
>> up with completely weird unit-ids after moving cpus in and out of
>> cpupools a lot, keeping the current scheme is the only one I could
>> think of.
> And how about extending the description to include this?
Okay.
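To give the description a concrete example (just an illustration, not
necessarily the final wording): with a granularity of 2 the unit ids of
a 4-vcpu domain would be 0 and 2, i.e. always the vcpu_id of the first
vcpu in the unit. Dividing by the granularity would yield 0 and 1
instead, but as cpus outside any cpupool are handled as single-vcpu
units, such ids would quickly become rather weird once cpus are moved
in and out of cpupools.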
>>>> +    }
>>>> +    unit->runstate_cnt[v->runstate.state]++;
>>>> +}
>>>> +
>>>>  static struct sched_unit *sched_alloc_unit(struct vcpu *v)
>>>>  {
>>>>      struct sched_unit *unit, **prev_unit;
>>>>      struct domain *d = v->domain;
>>>> +    for_each_sched_unit ( d, unit )
>>>> +        if ( unit->vcpu_list->vcpu_id / sched_granularity ==
>>> ... here. (I also don't see why you don't use unit->unit_id here.)
> And is there a reason not to use unit->unit_id here then, which
> is slightly cheaper to access?
Right, will change.
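I.e. roughly (only sketching the intended change, the right hand side of
the comparison staying as it is):

    for_each_sched_unit ( d, unit )
        if ( unit->unit_id / sched_granularity ==

unit->unit_id is equal to unit->vcpu_list->vcpu_id by construction (it
is set to the first vcpu's id above), so this only drops the extra
dereference.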
Juergen