On 08.04.2022 13:02, Julien Grall wrote:
> On 08/04/2022 08:16, Jan Beulich wrote:
>> See the code comment. The higher the rate of vCPU-s migrating across
>> pCPU-s, the less useful this attempted optimization actually is. With
>> credit2 the migration rate looks to be unduly high even on mostly idle
>> systems, and hence on large systems lock contention here isn't very
>> difficult to observe.
> 
> "high" and "large" is quite vague. Do you have more details on where you 
> observed this issue and the improvement after this patch?

I have no data beyond what I observed on the failed 4.12 osstest flights,
where I mentioned I would make such a patch and send it out as RFC.

>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
>>       unsigned int port;
>>       struct evtchn *chn;
>>   
>> +    /*
>> +     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
>> +     * vCPU-s they're to be delivered to run on. In order to limit lock
>> +     * contention, check for an empty list prior to acquiring the lock. In the
>> +     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
>> +     * until the vCPU is migrated (again) to another pCPU.
>> +     */
> 
> AFAIU, the downside is another pCPU (and therefore vCPU) will get 
> disturbed by the interrupt.

But only rarely, i.e. only when the race has actually occurred.
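
For illustration, here is a minimal, self-contained sketch of the pattern
the comment describes (the names and the pthread lock are stand-ins, not
the actual Xen code):

#include <pthread.h>
#include <stddef.h>

struct pirq_evtchn {
    struct pirq_evtchn *next;
    unsigned int port;
};

struct vcpu_state {
    struct pirq_evtchn *pirq_head;    /* modified only under the lock */
    pthread_mutex_t event_lock;
};

static void move_pirqs(struct vcpu_state *v)
{
    /*
     * Unlocked check: when the list is empty, don't take the lock at
     * all. A pIRQ bound concurrently is merely re-targeted on the next
     * migration - the race is benign; only delivery locality suffers
     * for a while.
     */
    if ( !__atomic_load_n(&v->pirq_head, __ATOMIC_RELAXED) )
        return;

    pthread_mutex_lock(&v->event_lock);
    for ( struct pirq_evtchn *chn = v->pirq_head; chn; chn = chn->next )
    {
        /* Re-target chn->port to the vCPU's new pCPU here. */
    }
    pthread_mutex_unlock(&v->event_lock);
}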

> Maybe we should revive "evtchn: convert 
> domain event lock to an r/w one"?

Not sure - the patch was rejected because, overall, there were too few
cases where read_lock() would suffice.
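
For context, the idea there (sketched below with generic pthread
primitives, not the actual rejected patch) was that read-mostly paths
could then share the lock:

#include <pthread.h>

static pthread_rwlock_t event_lock = PTHREAD_RWLOCK_INITIALIZER;

static void walk_pirq_list(void)
{
    /* Paths that only traverse state could take the lock shared ... */
    pthread_rwlock_rdlock(&event_lock);
    /* ... walk the list, re-target interrupts, etc. ... */
    pthread_rwlock_unlock(&event_lock);
}

static void bind_evtchn(void)
{
    /* ... while bind/unbind still needs it exclusively. */
    pthread_rwlock_wrlock(&event_lock);
    /* ... modify the list ... */
    pthread_rwlock_unlock(&event_lock);
}

With too few pure readers, though, such a conversion buys little over a
plain spin lock.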

Jan

