Jan Kiszka <[email protected]> writes:

> In order to access the shadow VMCS, we need to load it. At this point,
> vmx->loaded_vmcs->vmcs and the actually loaded one start to differ. If
> we now get preempted by Linux, vmx_vcpu_put and, on return,
> vmx_vcpu_load will work against the wrong VMCS. That can cause
> copy_shadow_to_vmcs12 to corrupt the vmcs12 state.

Ouch! I apologize if I missed this in the previous discussion, but why do
we never hit this condition while running a Linux guest?

Will there be a performance impact from this change? I hope it's
negligible.
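
Just to check that I'm reading the window correctly: the interleaving you
describe would be roughly this, if I understand it right (a simplified
sketch on my side, not the literal code; field/field_value are just
placeholders for the loop body):

        vmcs_load(shadow_vmcs);            /* CPU's current VMCS is now shadow_vmcs */

        /*
         * <-- preemption here: vmx_vcpu_put() and, on return, vmx_vcpu_load()
         *     operate on vmx->loaded_vmcs->vmcs, which is not the VMCS that
         *     is actually loaded; vmx_vcpu_load() makes loaded_vmcs->vmcs
         *     current again.
         */

        field_value = vmcs_readl(field);   /* now reads from the wrong VMCS...      */
        /* ...and the bogus value is stored into vmcs12, corrupting its state */

        vmcs_clear(shadow_vmcs);
        vmcs_load(vmx->loaded_vmcs->vmcs);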

> Fix the issue by disabling preemption during the copy operation.
>
> copy_vmcs12_to_shadow is safe from this issue as it is executed by
> vmx_vcpu_run when preemption is already disabled before vmentry.
>
> Signed-off-by: Jan Kiszka <[email protected]>
> ---
>
> This specifically fixes Jailhouse in KVM on CPUs with shadow VMCS
> support.
>
>  arch/x86/kvm/vmx.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 04fa1b8..f3de106 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6417,6 +6417,8 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
>       const unsigned long *fields = shadow_read_write_fields;
>       const int num_fields = max_shadow_read_write_fields;
>  
> +     preempt_disable();
> +
>       vmcs_load(shadow_vmcs);
>  
>       for (i = 0; i < num_fields; i++) {
> @@ -6440,6 +6442,8 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
>  
>       vmcs_clear(shadow_vmcs);
>       vmcs_load(vmx->loaded_vmcs->vmcs);
> +
> +     preempt_enable();
>  }
>  
>  static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
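
And for the copy_vmcs12_to_shadow() side: agreed that it looks safe. If I
read the code correctly, that call already sits inside the section where
vcpu_enter_guest() has disabled preemption, roughly like this (simplified
sketch from memory, not the literal code):

        static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
        {
                struct vcpu_vmx *vmx = to_vmx(vcpu);

                /*
                 * vcpu_enter_guest() has already done preempt_disable()
                 * before calling into here, so this copy cannot race with
                 * vmx_vcpu_put()/vmx_vcpu_load().
                 */
                if (vmx->nested.sync_shadow_vmcs) {
                        copy_vmcs12_to_shadow(vmx);
                        vmx->nested.sync_shadow_vmcs = false;
                }
                ...
        }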