2018-01-27 09:50+0100, Paolo Bonzini:
> Place the MSR bitmap in struct loaded_vmcs, and update it in place
> every time the x2apic or APICv state can change.  This is rare and
> the loop can handle 64 MSRs per iteration, in a similar fashion as
> nested_vmx_prepare_msr_bitmap.
> 
> This prepares for choosing, on a per-VM basis, whether to intercept
> the SPEC_CTRL and PRED_CMD MSRs.
> 
> Suggested-by: Jim Mattson <jmatt...@google.com>
> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
> ---
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -10022,7 +10043,7 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>       int msr;
>       struct page *page;
>       unsigned long *msr_bitmap_l1;
> -     unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap;
> +     unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;

The physical address of the nested msr_bitmap is never loaded into the VMCS.

The resolution you provided had an extra hunk in prepare_vmcs02_full():

+       vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));

I have queued that as:

+       if (cpu_has_vmx_msr_bitmap())
+               vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));

but it should be a part of the patch or a followup fix.

Is the branch already merged into PTI?

Thanks.

>  
>       /* This shortcut is ok because we support only x2APIC MSRs so far. */
>       if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
> @@ -11397,7 +11418,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
>       vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
>  
>       if (cpu_has_vmx_msr_bitmap())
> -             vmx_set_msr_bitmap(vcpu);
> +             vmx_update_msr_bitmap(vcpu);
>  
>       if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
>                               vmcs12->vm_exit_msr_load_count))
> -- 
> 1.8.3.1
> 
