2016-11-29 18:14-0800, David Matlack:
> KVM emulates MSR_IA32_VMX_CR{0,4}_FIXED1 with the value -1ULL, meaning
> all CR0 and CR4 bits are allowed to be 1 during VMX operation.
> 
> This does not match real hardware, which disallows the high 32 bits of
> CR0 to be 1, and disallows reserved bits of CR4 to be 1 (including bits
> which are defined in the SDM but missing according to CPUID). A guest
> can induce a VM-entry failure by setting these bits in GUEST_CR0 and
> GUEST_CR4, despite MSR_IA32_VMX_CR{0,4}_FIXED1 indicating they are
> valid.
> 
> Since KVM has allowed all bits to be 1 in CR0 and CR4, the existing
> checks on these registers do not verify must-be-0 bits. Fix these checks
> to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
> 
> This patch should introduce no change in behavior in KVM, since these
> MSRs are still -1ULL.
> 
> Signed-off-by: David Matlack <dmatl...@google.com>
> ---
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -4104,6 +4110,40 @@ static void ept_save_pdptrs(struct kvm_vcpu *vcpu)
> +static bool nested_guest_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +     u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> +     u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> +     struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +
> +     if (to_vmx(vcpu)->nested.nested_vmx_secondary_ctls_high &
> +             SECONDARY_EXEC_UNRESTRICTED_GUEST &&
> +         nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
> +             fixed0 &= ~(X86_CR0_PE | X86_CR0_PG);

These bits also seem to be guaranteed to be 1 in fixed1, so the two MSRs
have complicated dependencies.

There is another exception, SDM 26.3.1.1 (Checks on Guest Control
Registers, Debug Registers, and MSRs):

  Bit 29 (corresponding to CR0.NW) and bit 30 (CD) are never checked
  because the values of these bits are not changed by VM entry; see
  Section 26.3.2.1.

And another check:

  If bit 31 in the CR0 field (corresponding to PG) is 1, bit 0 in that
  field (PE) must also be 1.
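A standalone sketch of how those two exceptions could be folded into the
guest CR0 check (a userspace model, not the kernel code; the
fixed_bits_valid() semantics are assumed, since the helper is not shown
in this hunk):

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR0_PE (UINT64_C(1) << 0)
#define X86_CR0_NW (UINT64_C(1) << 29)
#define X86_CR0_CD (UINT64_C(1) << 30)
#define X86_CR0_PG (UINT64_C(1) << 31)

/* Assumed FIXED-MSR semantics: every bit set in fixed0 must be 1 in
 * val, and every bit clear in fixed1 must be 0 in val. */
static int fixed_bits_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
	return ((val & fixed1) | fixed0) == val;
}

/* Guest CR0 check with the SDM 26.3.1.1 exceptions folded in:
 * NW (bit 29) and CD (bit 30) are never checked, and PG=1
 * additionally requires PE=1. */
static int guest_cr0_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
	uint64_t ignore = X86_CR0_NW | X86_CR0_CD;

	if ((val & X86_CR0_PG) && !(val & X86_CR0_PE))
		return 0;

	return fixed_bits_valid(val & ~ignore, fixed0 & ~ignore,
				fixed1 | ignore);
}
```

With this, a guest setting CD/NW passes even when fixed1 would
nominally forbid those bits, while PG without PE fails regardless of
the fixed bits.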

> +
> +     return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +static bool nested_host_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +     u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> +     u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> +
> +     return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +static bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +     u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed0;
> +     u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed1;
> +
> +     return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +/* No difference in the restrictions on guest and host CR4 in VMX operation. */
> +#define nested_guest_cr4_valid       nested_cr4_valid
> +#define nested_host_cr4_valid        nested_cr4_valid
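(For reference, a userspace model of what fixed_bits_valid() presumably
does; the helper itself is not in this hunk, so its exact semantics are
an assumption here: a bit set in fixed0 must be 1 in val, and a bit
clear in fixed1 must be 0.)

```c
#include <assert.h>
#include <stdint.h>

/* Assumed semantics of the fixed_bits_valid() helper used above:
 * every bit set in fixed0 must be set in val, and every bit clear
 * in fixed1 must be clear in val. */
static int fixed_bits_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
	return ((val & fixed1) | fixed0) == val;
}
```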

We should also use these cr0 and cr4 checks in handle_vmon().
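Roughly this shape, as a userspace sketch (vmxon_crs_valid() and the
fixed_bits_valid() model are illustrative names and assumptions, not
the kernel code): VMXON should fail when the emulated CR0/CR4 violate
the FIXED0/FIXED1 constraints.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed FIXED-MSR semantics: bits set in fixed0 must be 1 in val,
 * bits clear in fixed1 must be 0 in val. */
static int fixed_bits_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
	return ((val & fixed1) | fixed0) == val;
}

/* Hypothetical extra check for handle_vmon(): refuse VMXON unless
 * both CR0 and CR4 satisfy their FIXED0/FIXED1 constraints. */
static int vmxon_crs_valid(uint64_t cr0, uint64_t cr4,
			   uint64_t cr0_fixed0, uint64_t cr0_fixed1,
			   uint64_t cr4_fixed0, uint64_t cr4_fixed1)
{
	return fixed_bits_valid(cr0, cr0_fixed0, cr0_fixed1) &&
	       fixed_bits_valid(cr4, cr4_fixed0, cr4_fixed1);
}
```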

I've applied this series to kvm/queue for early testing.
Please send replacement patch or patch(es) on top of this series.

Thanks.
