On 26.06.2025 14:17, Teddy Astie wrote:
> On 26/06/2025 at 13:46, Juergen Gross wrote:
>> On 26.06.25 13:34, Oleksii Kurochko wrote:
>>>
>>> On 6/26/25 12:41 PM, Jan Beulich wrote:
>>> - Minimized inter-CPU TLB flushes — since VMIDs are local, TLB entries
>>>   don’t need to be invalidated on other CPUs when reused.
>>> - Better scalability — this approach works better on systems with a
>>>   large number of CPUs.
>>> - Frequent VM switches don’t require global TLB flushes — reducing the
>>>   overhead of context switching.
>>> However, the downside is that this model consumes more VMIDs. For
>>> example, if a single domain runs on 4 vCPUs across 4 CPUs, it will
>>> consume 4 VMIDs instead of just one.
>>
>> Consider you have 4 bits for VMIDs, resulting in 16 VMID values.
>>
>> If you have a system with 32 physical CPUs and 32 domains with 1 vcpu each
>> on that system, your scheme would NOT allow keeping each physical CPU busy
>> by running a domain on it, as only 16 domains could hold a VMID, and hence
>> be active, at the same time.
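
The counting argument can be sketched as follows; the pool size and the
simple first-fit allocator are purely illustrative assumptions, not Xen's
actual VMID code:

```python
# Hypothetical model of per-(vCPU, pCPU) VMID allocation from one global
# pool.  Names and the first-fit policy are illustrative only.

VMID_BITS = 4
POOL_SIZE = 1 << VMID_BITS      # 4 bits -> 16 VMID values in total


def active_domains(num_domains, vcpus_per_domain, pool_size=POOL_SIZE):
    """Each running vCPU holds its own VMID, so a domain with N running
    vCPUs consumes N VMIDs.  Return how many whole domains can be active
    at once before the pool runs dry."""
    free = pool_size
    running = 0
    for _ in range(num_domains):
        if free < vcpus_per_domain:
            break               # pool exhausted: this domain cannot run
        free -= vcpus_per_domain
        running += 1
    return running


# 32 single-vCPU domains on 32 pCPUs: only 16 get a VMID, 16 pCPUs idle.
print(active_domains(32, 1))    # -> 16
# A 4-vCPU domain spread over 4 pCPUs consumes 4 VMIDs, so only 4 fit.
print(active_domains(8, 4))     # -> 4
```
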
> 
> Why not instead consider dropping the use of a VMID in case there is none
> remaining?
> (i.e. systematically flush the guest TLB before entering the vCPU and
> use a "blank" VMID)

Why would one want to do that, when there's a better scheme available?
And how would you decide which VMs to penalize?

> I don't expect many platforms to allow for 32 pCPUs while not providing
> more than 16 VMID values. So at worst it would just be less efficient in
> that case.

How would you know? How many CPUs (cores) to have in a system is entirely
independent of the capabilities of the individual CPUs.

Jan
