On 29/03/2019 16:39, Jan Beulich wrote:
>>>> On 29.03.19 at 16:08, <jgr...@suse.com> wrote:
>> Via boot parameter sched_granularity=core (or sched_granularity=socket)
>> it is possible to change the scheduling granularity from thread (the
>> default) to either whole cores or even sockets.
>>
>> All logical cpus (threads) of the core or socket are always scheduled
>> together. This means that on a core only vcpus of the same domain will
>> ever be active, and those vcpus will always be scheduled at the same
>> time.
>>
>> This is achieved by making the scheduler treat "schedule items" rather
>> than individual vcpus as its primary objects to schedule. Each schedule
>> item consists of as many vcpus as there are threads per core on the
>> current system. The vcpu->item relation is fixed.
> 
> Hmm, I find this surprising: A typical guest would have more vCPU-s
> than there are threads per core. So if two of them want to run, but
> each is associated with a different core, you'd need two cores instead
> of one to actually fulfill the request? I could see this necessarily being

Correct.

> the case if you arranged vCPU-s into virtual threads, cores, sockets,
> and nodes, but at least from the patch titles it doesn't look as if you
> did in this series. Are there other reasons to make this a fixed
> relationship?

In fact I'm doing that, but only implicitly and without adapting the
cpuid-related information. The idea is to later pass the topology
information, at least below the scheduling granularity, on to the guest.

Not having the fixed relationship would result in something like the
co-scheduling series Dario already sent, which would need more than
mechanical changes in each scheduler.
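
Just to illustrate the fixed relation, here is a minimal sketch (made-up
names, not the data structures from the series): with granularity-many
vcpus grouped per item, the mapping is plain arithmetic and never changes
at runtime.

/* Illustrative sketch only - hypothetical names, not the actual
 * data structures used in the series. */
#define SCHED_GRANULARITY 2   /* threads per core on this host (example) */

struct vcpu;                  /* opaque for the purpose of this sketch */

struct sched_item {
    unsigned int item_id;
    struct vcpu *vcpu[SCHED_GRANULARITY]; /* run on the threads of one core */
};

/* Fixed vcpu->item relation: vcpu_id / granularity selects the item,
 * vcpu_id % granularity the thread slot inside it.  Nothing is ever
 * reassigned at runtime. */
static inline unsigned int vcpu_to_item_id(unsigned int vcpu_id)
{
    return vcpu_id / SCHED_GRANULARITY;
}

static inline unsigned int vcpu_to_item_slot(unsigned int vcpu_id)
{
    return vcpu_id % SCHED_GRANULARITY;
}

With a flexible mapping the scheduler would instead have to decide at
each scheduling point which runnable vcpus to gang together on a core,
which is what leads to the larger changes of the co-scheduling approach.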

> As a minor cosmetic request visible from this cover letter right away:
> Could the command line option please become "sched-granularity="
> or even "sched-gran="?

Of course!


Juergen

