On 23/03/2019 03:32, Dario Faggioli wrote:
> On Fri, 2019-03-22 at 10:04 +0100, Juergen Gross wrote:
>> Credit2's smt_idle_mask_set() and smt_idle_mask_clear() are used to
>> identify idle cores to which vcpus can be moved. A core is considered
>> idle when all of its siblings are known to have the idle vcpu running
>> on them.
>>
>> Unfortunately the information about which vcpu is running on a cpu is
>> kept per runqueue. So in case not all siblings are in the same runqueue,
>> a core will never be regarded as idle, as the sibling not in the
>> runqueue is never known to be running the idle vcpu.
>>
> Good catch.
> 
> I apparently forgot to take care of this when I introduced the
> possibility of having one runqueue per single CPU (which, in an SMT
> enabled system, would mean per-thread runqs).
> 
>> This problem can be solved by and-ing the core's sibling cpumask with
>> the runqueue's active mask before doing the idle test.
>>
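
For reference, the approach of the patch looks roughly like the sketch
below (only an illustration, not the literal patch: the real code would
use a per-cpu scratch cpumask instead of one on the stack, and the
runqueue data is passed in explicitly here to avoid the lookup):

static inline void
smt_idle_mask_set(unsigned int cpu, const struct csched2_runqueue_data *rqd,
                  const cpumask_t *idlers, cpumask_t *mask)
{
    cpumask_t sibs;

    /* Only consider siblings which are active in this very runqueue. */
    cpumask_and(&sibs, per_cpu(cpu_sibling_mask, cpu), &rqd->active);

    /* The core counts as idle only if all in-runqueue siblings are idle. */
    if ( cpumask_subset(&sibs, idlers) )
        cpumask_or(mask, mask, &sibs);
}
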
> Right. There's one thing, though. Using one runq per CPU, in this
> scheduler, is a really poor choice, and I basically would recommend it
> only for testing or debugging (and this should probably be highlighted
> a lot better in the docs).
> 
> Therefore, I'm a bit reluctant to add another cpumask bitwise
> operation in hot paths just to take care of it.
> 
> Note that this also applies to cpupools, i.e., I also consider putting
> two sibling hyperthreads in different pools a very poor choice. If you
> recall, I even sent a patch to forbid doing that (which is still
> blocked on a series of yours for passing parameters from the tools to
> the hypervisor).
> 
> The only case I care about is a CPU being off-lined.

In my core scheduling solution we only ever have one active sibling per
core.

> So, one thing that we could do is to put credit2_runqueue=cpu inside
> such #ifdef-s too (and I can prepare a patch to that effect myself, if
> you want).
> 
> To properly deal with offline CPUs, I think we can change the logic a
> little, i.e., we initialize the smt_idle mask to all-1 (all CPUs idle),
> and we also make sure that we set the CPU bit (instead of clearing it)
> in smt_idle when we remove the CPU from the scheduler.

How does that help?

Only if all siblings are marked as idle in rqd->idle do we set any bits
in rqd->smt_idle (all siblings).
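
I.e. today the test is basically (from memory, not the literal source,
and ignoring how rqd is looked up):

    if ( cpumask_subset(per_cpu(cpu_sibling_mask, cpu), &rqd->idle) )
        cpumask_or(&rqd->smt_idle, &rqd->smt_idle,
                   per_cpu(cpu_sibling_mask, cpu));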

Or did you mean rqd->idle instead?

This might be problematic in the case of one runqueue per cpu, though.

Another idea: we could introduce a credit2 pcpu data cpumask pointer as
a replacement for cpu_sibling_mask. For runqueue per cpu it would point
to cpumask_of(cpu), for the "normal case" it would point to the correct
cpu_sibling_mask, and for special cases we could allocate a mask if
needed.
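
Roughly like this (sketch only, the field and function names are made
up):

    struct csched2_pcpu {
        /* ... existing fields ... */
        const cpumask_t *sibling_mask;  /* used instead of cpu_sibling_mask */
    };

    static void csched2_init_sibling_mask(struct csched2_pcpu *spc,
                                          unsigned int cpu, bool runq_per_cpu)
    {
        if ( runq_per_cpu )
            /* One runqueue per cpu: the cpu is its only relevant sibling. */
            spc->sibling_mask = cpumask_of(cpu);
        else
            /* Normal case: point at the topology sibling mask. */
            spc->sibling_mask = per_cpu(cpu_sibling_mask, cpu);
        /*
         * Special cases (e.g. siblings ending up in different pools) would
         * allocate and fill a private mask here instead.
         */
    }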


Juergen
