The use, or not, of memory clobbers on the VMX instructions is complicated.
There are two separate aspects to consider: 1. Originally, the VMX instructions used hardcoded bytes, including memory encodings. Therefore, the compiler couldn't see the correct relationship between parameters. The memory clobber for this purpose should have been dropped when switching to mnemonics. This covers INVEPT and INVVPID, each of which has no change in memory, nor in fact the current address space in use. 2. Most of these instructions operate on a VMCS region. This is a (mostly) opaque data structure, operated on by physical address. Again, this hides the relationship between the instructions and the VMCS from the compiler. The processor might use internal state to cache the VMCS (writing it back to memory on VMCLEAR), or it might operate on memory directly. Because the VMCS is opaque (so the compiler has nothing interesting to know about it), and because VMREAD/VMWRITE have chosen not to use a memory clobber (to tell the compiler that something changed), none of the other VMX instructions should use a memory clobber either. This covers VMXON, VMXOFF, VMPTRLD and VMPTCLEAR. Make the use of memory clobbers consistent for all VMX instructions. Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com> --- CC: Jan Beulich <jbeul...@suse.com> CC: Roger Pau Monné <roger....@citrix.com> Based on top of the posted cleanup. 
---
 xen/arch/x86/hvm/vmx/vmcs.c            | 4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmx.h | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index a44475ae15bd..6daaeaa656a8 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -754,7 +754,7 @@ static int _vmx_cpu_up(bool bsp)
                    _ASM_EXTABLE(1b, %l[vmxon_fault])
                    :
                    : [addr] "m" (this_cpu(vmxon_region))
-                   : "memory"
+                   :
                    : vmxon_fail, vmxon_fault );
 
     this_cpu(vmxon) = 1;
@@ -811,7 +811,7 @@ void cf_check vmx_cpu_down(void)
     BUG_ON(!(read_cr4() & X86_CR4_VMXE));
 
     this_cpu(vmxon) = 0;
-    asm volatile ( "vmxoff" ::: "memory" );
+    asm volatile ( "vmxoff" );
 
     local_irq_restore(flags);
 }
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
index 33d3d43a3851..86349d4a5431 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -298,7 +298,7 @@ static always_inline void __vmptrld(u64 addr)
                    "jbe %l[vmfail]"
                    :
                    : [addr] "m" (addr)
-                   : "memory"
+                   :
                    : vmfail );
     return;
 
@@ -312,7 +312,7 @@ static always_inline void __vmpclear(u64 addr)
                    "jbe %l[vmfail]"
                    :
                    : [addr] "m" (addr)
-                   : "memory"
+                   :
                    : vmfail );
     return;
 
@@ -408,7 +408,7 @@ static always_inline void __invept(unsigned long type, uint64_t eptp)
                    "jbe %l[vmfail]"
                    :
                    : [operand] "m" (operand), [type] "r" (type)
-                   : "memory"
+                   :
                    : vmfail );
    return;
 
@@ -430,7 +430,7 @@ static always_inline void __invvpid(unsigned long type, u16 vpid, u64 gva)
                    "2:" _ASM_EXTABLE(1b, 2b)
                    :
                    : [operand] "m" (operand), [type] "r" (type)
-                   : "memory"
+                   :
                    : vmfail );
     return;
 
base-commit: befc384d21784affa3daf2abc85b02500e4dc545
prerequisite-patch-id: 727f158a737c65bca1e4210a6bc132302037f55c
prerequisite-patch-id: 3f88b36cbba121e9893b4c8b52df9aea4f58e8ad
prerequisite-patch-id: c0b199b9ceb1ddd659e27ec8571741bd78ae6265
-- 
2.39.5