On 01/04/2019 11:47, Juergen Gross wrote:
> On 01/04/2019 10:50, Andrew Cooper wrote:
>> On 29/03/2019 15:09, Juergen Gross wrote:
>>> Make sure the number of vcpus is always a multiple of the scheduling
>>> granularity. Note that we don't support a scheduling granularity above
>>> one on ARM.
>>
>> I'm afraid that I don't think this is a clever move.  In turn, this
>> brings into question the approach to idle handling.
>>
>> Firstly, with a proposed socket granularity, this would be 128 on some
>> systems which exist today.  Furthermore, consider the case where
>> cpupool0 has a granularity of 1, and a second pool has a granularity of
>> 2.  A domain can be created with an odd number of vcpus and operate in
>> pool0 fine, but can't now be moved to pool1.
> 
> For now granularity is the same for all pools, but I plan to enhance
> that in future.
> 
> The answer to that problem might be either to allow for later addition
> of dummy vcpus (e.g. by sizing only the vcpu pointer array to the needed
> number), or to really disallow moving such a domain between pools.
> 
>> If at all possible, I think it would be better to try and reuse the idle
>> cpus for holes like this.  Seeing as you've been playing with this code
>> a lot, what is your assessment?
> 
> This would be rather complicated. I'd either need to switch vcpus
> dynamically in schedule items, or I'd need to special case the idle
> vcpus in _lots_ of places.

I have thought more about this and I may have found a way to make it
less intrusive than I thought in the beginning.

I'll give it a try...


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel