On 07/04/2025 1:00 pm, Jan Beulich wrote:
> On 07.04.2025 12:45, Andrew Cooper wrote:
>> The use, or not, of memory clobbers on the VMX instructions is complicated.
>>
>> There are two separate aspects to consider:
>>
>> 1. Originally, the VMX instructions used hardcoded bytes, including memory
>>    encodings.  Therefore, the compiler couldn't see the correct relationship
>>    between parameters.  The memory clobber for this purpose should have been
>>    dropped when switching to mnemonics.
>>
>>    This covers INVEPT and INVVPID, neither of which changes memory, nor
>>    in fact the current address space in use.
> Yet then they need to come after respective table modifications.

They don't AFAICT, but the reasoning is complicated.  I'll expand on it
in v2.
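
For context, the wrapper in question is shaped roughly like this (a
sketch from memory rather than the verbatim Xen code; the name
invept_sketch is mine).  With the mnemonic, the descriptor goes through
an "m" constraint, so the compiler can see the read directly, and the
clobber that papered over the hardcoded encoding adds nothing for that
purpose:

#include <stdint.h>

/*
 * Sketch of an INVEPT wrapper using the mnemonic.  The "m" constraint
 * on the descriptor tells the compiler the instruction reads it, so the
 * stores initialising desc cannot sink below the asm; no blanket
 * "memory" clobber is needed to express that relationship.
 */
static inline void invept_sketch(uint64_t type, uint64_t eptp)
{
    struct {
        uint64_t eptp, rsvd;
    } desc = { eptp, 0 };

    asm volatile ( "invept %[desc], %[type]"
                   :
                   : [desc] "m" (desc), [type] "r" (type) );
}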

>
>> 2. Most of these instructions operate on a VMCS region.  This is a (mostly)
>>    opaque data structure, operated on by physical address.  Again, this hides
>>    the relationship between the instructions and the VMCS from the compiler.
>>
>>    The processor might use internal state to cache the VMCS (writing it back
>>    to memory on VMCLEAR), or it might operate on memory directly.
>>
>>    Because the VMCS is opaque (so the compiler has nothing interesting to know
>>    about it), and because VMREAD/VMWRITE have chosen not to use a memory
>>    clobber (to tell the compiler that something changed), none of the other
>>    VMX instructions should use a memory clobber either.
> For this, there's actually a good example below, with everything needed in
> context.
>
>>    This covers VMXON, VMXOFF, VMPTRLD and VMPTCLEAR.
> Nit: The last insn is VMCLEAR.

Oh, so it is, and we've got an incorrectly named wrapper.
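
(For reference, after this patch that wrapper would be shaped roughly
as below -- a sketch with a name of my choosing, not the verbatim Xen
code -- mirroring the VMREAD/VMWRITE choice of not claiming a "memory"
clobber for an opaque VMCS:

#include <stdint.h>

/*
 * Sketch of a VMCLEAR wrapper with the clobber dropped.  The operand is
 * an m64 holding the physical address of the VMCS; the VMCS itself is
 * opaque to the compiler, so a "memory" clobber tells it nothing it can
 * act on.
 */
static inline void vmclear_sketch(uint64_t paddr)
{
    asm volatile ( "vmclear %[addr]"
                   :
                   : [addr] "m" (paddr) );
}
)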

>
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -754,7 +754,7 @@ static int _vmx_cpu_up(bool bsp)
>>                 _ASM_EXTABLE(1b, %l[vmxon_fault])
>>                 :
>>                 : [addr] "m" (this_cpu(vmxon_region))
>> -               : "memory"
>> +               :
>>                 : vmxon_fail, vmxon_fault );
>>  
>>      this_cpu(vmxon) = 1;
>> @@ -811,7 +811,7 @@ void cf_check vmx_cpu_down(void)
>>  
>>      BUG_ON(!(read_cr4() & X86_CR4_VMXE));
>>      this_cpu(vmxon) = 0;
>> -    asm volatile ( "vmxoff" ::: "memory" );
>> +    asm volatile ( "vmxoff" );
> With the clobber dropped, the compiler is free to re-order the prior store
> with the asm(), despite the "volatile", isn't it? [1] This may then be
> applicable elsewhere as well.

Yeah, these might better stay as they are.
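
To spell out the hazard with a standalone example (a plain variable
standing in for this_cpu(vmxon); illustrative, not the actual Xen
code):

#include <stdbool.h>

bool vmxon_flag;  /* stand-in for this_cpu(vmxon) */

/*
 * Without a clobber, "volatile" only keeps the asm alive and ordered
 * against other volatile operations; the compiler remains free to sink
 * the ordinary store past the asm.
 */
void cpu_down_unordered(void)
{
    vmxon_flag = false;
    asm volatile ( "vmxoff" );          /* store may move below VMXOFF */
}

/*
 * With the "memory" clobber the asm acts as a compiler-level barrier:
 * the store must be emitted before the VMXOFF and cannot be cached
 * across it.
 */
void cpu_down_ordered(void)
{
    vmxon_flag = false;
    asm volatile ( "vmxoff" ::: "memory" );
}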

~Andrew
