Sean Christopherson <[email protected]> writes:
> Gah, I typed too slow :-)

Haha. I had the same thought.

> On Wed, Jun 10, 2020 at 11:34:21PM +0200, Thomas Gleixner wrote:
>> We have exception fixups to avoid exactly that kind of horrible
>> workarounds all over the place.
>> 
>> static inline int cpu_vmxoff_safe(void)
>> {
>>         int err;
>>  
>>         asm volatile("2: vmxoff; xor %[err],%[err]\n"
>>                      "1:\n\t"
>>                      ".section .fixup,\"ax\"\n\t"
>>                      "3:  mov %[fault],%[err] ; jmp 1b\n\t"
>>                      ".previous\n\t"
>>                      _ASM_EXTABLE(2b, 3b)
>>                      : [err] "=a" (err)
>>                      : [fault] "i" (-EFAULT)
>>                      : "memory");
>>         return err;
>> }
>> 
>> static inline void __cpu_emergency_vmxoff(void)
>> {
>>         if (!cpu_vmx_enabled())
>>                 return;
>>         if (!cpu_vmxoff_safe())
>>                 cr4_clear_bits(X86_CR4_VMXE);
>
> This bit is wrong: CR4.VMXE should be cleared even if VMXOFF faults, e.g.
> if this is called in NMI context and the NMI arrived in KVM code between
> VMXOFF and clearing CR4.VMXE.

Oh, right.
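
Something like the below then, I guess - a minimal sketch (untested, reusing
cpu_vmx_enabled()/cpu_vmxoff_safe() from above) which ignores the VMXOFF
result and clears CR4.VMXE unconditionally:

static inline void __cpu_emergency_vmxoff(void)
{
        if (!cpu_vmx_enabled())
                return;
        /*
         * Clear CR4.VMXE even when VMXOFF faulted: the fault only
         * tells us that VMX was already disabled, e.g. because the
         * NMI hit between VMXOFF and the CR4 write in KVM, but the
         * VMXE bit can still be set.
         */
        cpu_vmxoff_safe();
        cr4_clear_bits(X86_CR4_VMXE);
}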

> All other VMXOFF faults are mode related, i.e. any fault is guaranteed to
> be due to the !post-VMXON check unless we're magically in RM, VM86, compat
> mode, or at CPL>0.

Your patch is indeed simpler.

Thanks,

        tglx
