On 23.07.2019 16:29, Juergen Gross wrote:
> On 23.07.19 16:14, Jan Beulich wrote:
>> On 23.07.2019 16:03, Jan Beulich wrote:
>>> On 23.07.2019 15:44, Juergen Gross wrote:
>>>> On 23.07.19 14:42, Jan Beulich wrote:
>>>>> v->processor gets latched into st->processor before raising the softirq,
>>>>> but can't the vCPU be moved elsewhere by the time the softirq handler
>>>>> actually gains control? If that's not possible (and if it's not obvious
>>>>> why, and as you can see it's not obvious to me), then I think a code
>>>>> comment wants to be added there.
>>>>
>>>> You are right, it might be possible for the vcpu to move around.
>>>>
>>>> OTOH, is it really important to run the target vcpu exactly on the
>>>> cpu it is executing on (or has last executed on) at the time the
>>>> NMI/MCE is being queued? This is in no way related to the cpu the
>>>> MCE or NMI happened on. It is just a random cpu, and it would be
>>>> just as random if we did the cpu selection when the softirq handler
>>>> is running.
>>>>
>>>> One question to understand the idea behind all that: _why_ is the
>>>> vcpu pinned until it does an iret? I could understand if it were
>>>> pinned to the cpu where the NMI/MCE happened, but this is not the
>>>> case.
>>>
>>> Then it was never finished or got broken, I would guess.
>>
>> Oh, no. The #MC side use of this has gone away in 3a91769d6e, but the
>> other code was never cleaned up. So there doesn't seem to be any such
>> requirement anymore.
> 
> So just to be sure: you are fine with me removing the pinning for NMIs?

No, not the pinning as a whole. The forced CPU0 affinity should still
remain. It's just that there's no correlation anymore between the CPU
a vCPU was running on and the CPU it is to be pinned to (temporarily).

What can go away is the #MC part of the logic.
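
For reference, the pattern under discussion looks roughly like the
sketch below. All names here (st_info, queue_nmi_for_vcpu,
nmi_softirq_handler, NMI_SOFTIRQ) are illustrative only, not the
actual Xen code:

    /* Per-CPU record of the vCPU an NMI was queued for. */
    struct st_info {
        struct vcpu  *vcpu;
        unsigned int  processor;    /* v->processor at queueing time */
    };
    static DEFINE_PER_CPU(struct st_info, nmi_st);

    /* Queueing side: latch v->processor, then raise the softirq. */
    static void queue_nmi_for_vcpu(struct vcpu *v)
    {
        struct st_info *st = &this_cpu(nmi_st);

        st->vcpu = v;
        st->processor = v->processor;  /* can be stale by the time the
                                        * handler runs: the scheduler
                                        * may move v to another pCPU
                                        * in between */
        raise_softirq(NMI_SOFTIRQ);    /* illustrative softirq number */
    }

    /* Softirq side: pin the vCPU until it executes an iret. */
    static void nmi_softirq_handler(void)
    {
        struct st_info *st = &this_cpu(nmi_st);

        /*
         * The latched st->processor is effectively a random pCPU, so
         * it no longer influences anything here. What remains is only
         * the temporary forced affinity (e.g. to CPU0), which the
         * iret path undoes again later.
         */
        vcpu_set_hard_affinity(st->vcpu, cpumask_of(0));
        vcpu_kick(st->vcpu);
    }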

Jan