On 23/09/20 23:53, Sean Christopherson wrote:
> Reset the MMU context during kvm_set_cr4() if SMAP or PKE is toggled.
> Recent commits to (correctly) not reload PDPTRs when SMAP/PKE are
> toggled inadvertently skipped the MMU context reset due to the mask
> of bits that triggers PDPTR loads also being used to trigger MMU context
> resets.
> 
> Fixes: 427890aff855 ("kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE mode")
> Fixes: cb957adb4ea4 ("kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE mode")
> Cc: Jim Mattson <jmatt...@google.com>
> Cc: Peter Shier <psh...@google.com>
> Cc: Oliver Upton <oup...@google.com>
> Signed-off-by: Sean Christopherson <sean.j.christopher...@intel.com>
> ---
>  arch/x86/kvm/x86.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 17f4995e80a7..fd0da41bc149 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -977,6 +977,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>       unsigned long old_cr4 = kvm_read_cr4(vcpu);
>       unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
>                                  X86_CR4_SMEP;
> +     unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;
>  
>       if (kvm_valid_cr4(vcpu, cr4))
>               return 1;
> @@ -1004,7 +1005,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>       if (kvm_x86_ops.set_cr4(vcpu, cr4))
>               return 1;
>  
> -     if (((cr4 ^ old_cr4) & pdptr_bits) ||
> +     if (((cr4 ^ old_cr4) & mmu_role_bits) ||
>           (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
>               kvm_mmu_reset_context(vcpu);
>  
> 

Queued, thanks.

Paolo
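
[Editorial note] For readers following the logic of the fix, below is a minimal standalone sketch (not kernel code) of the check the patch changes: XOR-ing the new and old CR4 values isolates the bits that toggled, and the result is masked to decide whether the MMU context must be rebuilt. The X86_CR4_* macros are redefined here from their Intel SDM bit positions only so the example compiles on its own; the scenario shows a guest toggling CR4.SMAP, which the old pdptr_bits mask misses but the new mmu_role_bits mask catches.

    /*
     * Standalone sketch of the "which CR4 bits changed?" check in
     * kvm_set_cr4().  Not kernel code: the X86_CR4_* values are
     * redefined from their architectural bit positions so this
     * example builds by itself.
     */
    #include <stdio.h>

    #define X86_CR4_PSE   (1UL << 4)
    #define X86_CR4_PAE   (1UL << 5)
    #define X86_CR4_PGE   (1UL << 7)
    #define X86_CR4_SMEP  (1UL << 20)
    #define X86_CR4_SMAP  (1UL << 21)
    #define X86_CR4_PKE   (1UL << 22)

    int main(void)
    {
            unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE |
                                       X86_CR4_PAE | X86_CR4_SMEP;
            unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP |
                                          X86_CR4_PKE;

            /* Guest toggles CR4.SMAP; all other bits are unchanged. */
            unsigned long old_cr4 = X86_CR4_PAE;
            unsigned long cr4 = old_cr4 | X86_CR4_SMAP;

            /* cr4 ^ old_cr4 leaves only the bits that actually flipped. */
            printf("pdptr_bits sees the toggle:    %d\n",
                   ((cr4 ^ old_cr4) & pdptr_bits) != 0);    /* 0: reset skipped */
            printf("mmu_role_bits sees the toggle: %d\n",
                   ((cr4 ^ old_cr4) & mmu_role_bits) != 0); /* 1: reset happens */
            return 0;
    }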
