On 15.05.2024 11:14, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/cpu/vpmu_amd.c
> +++ b/xen/arch/x86/cpu/vpmu_amd.c
> @@ -290,7 +290,7 @@ static int cf_check amd_vpmu_save(struct vcpu *v, bool to_guest)
>      context_save(v);
>  
>      if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
> -         is_msr_bitmap_on(vpmu) )
> +         is_msr_bitmap_on(vpmu) && cpu_has_svm )
>          amd_vpmu_unset_msr_bitmap(v);

Assuming the change in the earlier patch can actually be established to be
safe, then, along the lines of an earlier comment from Stefano, the addition
may want to move earlier in the overall conditional (here and below). In
fact I wonder whether it wouldn't be neater to have

#define is_svm_vcpu(v) (cpu_has_svm && is_hvm_vcpu(v))

at the top of the file, and then use that throughout to replace is_hvm_vcpu().
Same on the Intel side then, obviously.
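To illustrate, an abbreviated sketch (not a full implementation) of how the
check above would then read, assuming cpu_has_svm ends up short-circuiting
to compile-time false when SVM support is compiled out:

    /* is_svm_vcpu() subsumes both the build-time and the run-time check. */
    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_svm_vcpu(v) &&
         is_msr_bitmap_on(vpmu) )
        amd_vpmu_unset_msr_bitmap(v);

This also deals with the ordering aspect, as the cheap predicate then
naturally comes ahead of is_msr_bitmap_on().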

> @@ -288,7 +288,7 @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest)
>  
>      /* Unset PMU MSR bitmap to trap lazy load. */
>      if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
> -         cpu_has_vmx_msr_bitmap )
> +         cpu_has_vmx && cpu_has_vmx_msr_bitmap )

Aren't you elsewhere adding IS_ENABLED() to cpu_has_vmx_*, rendering this (and
similar changes further down) redundant?
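I.e. assuming that other patch ends up with something along the lines of

    /* Hypothetical; exact Kconfig symbol and expression may differ. */
    #define cpu_has_vmx_msr_bitmap \
        ( IS_ENABLED(CONFIG_VMX) && \
          (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP) )

(going from memory of the present definition), the separate cpu_has_vmx
check here wouldn't buy us anything: the predicate would already evaluate
to compile-time false in !VMX builds, allowing the compiler to eliminate
the entire branch.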

Jan
