On 30.11.2021 21:56, Andrew Cooper wrote:
> On 29/11/2021 09:10, Jan Beulich wrote:
>> @@ -133,14 +133,13 @@ int vpmu_do_msr(unsigned int msr, uint64
>>           goto nop;
>>  
>>      vpmu = vcpu_vpmu(curr);
>> -    ops = vpmu->arch_vpmu_ops;
>> -    if ( !ops )
>> +    if ( !vpmu_is_set(vpmu, VPMU_INITIALIZED) )
>>          goto nop;
>>  
>> -    if ( is_write && ops->do_wrmsr )
>> -        ret = ops->do_wrmsr(msr, *msr_content, supported);
>> -    else if ( !is_write && ops->do_rdmsr )
>> -        ret = ops->do_rdmsr(msr, msr_content);
>> +    if ( is_write && vpmu_ops.do_wrmsr )
>> +        ret = alternative_call(vpmu_ops.do_wrmsr, msr, *msr_content, supported);
>> +    else if ( !is_write && vpmu_ops.do_rdmsr )
>> +        ret = alternative_call(vpmu_ops.do_rdmsr, msr, msr_content);
> 
> Elsewhere, you've dropped the function pointer NULL checks.  Why not here?

No, I'm not dropping any function pointer checks here; all I drop is
the check of the ops pointer itself being NULL. The function pointer
checks then all get dropped in patch 3.
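
To illustrate the distinction, a condensed before/after sketch of the
call site (abbreviated from the hunk above, not the literal code):

    /* Before: per-vCPU ops pointer, checked for NULL before use. */
    const struct arch_vpmu_ops *ops = vpmu->arch_vpmu_ops;
    if ( !ops )                       /* <- only this check goes away */
        goto nop;
    if ( is_write && ops->do_wrmsr )  /* function pointer check stays */
        ret = ops->do_wrmsr(msr, *msr_content, supported);

    /* After: single global vpmu_ops; the indirect call is patched via
     * alternative_call(), with the function pointer check still in
     * place (until patch 3 removes it). */
    if ( is_write && vpmu_ops.do_wrmsr )
        ret = alternative_call(vpmu_ops.do_wrmsr, msr, *msr_content, supported);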

>> --- a/xen/include/asm-x86/vpmu.h
>> +++ b/xen/include/asm-x86/vpmu.h
>> @@ -61,25 +61,25 @@ struct vpmu_struct {
>>      u32 hw_lapic_lvtpc;
>>      void *context;      /* May be shared with PV guest */
>>      void *priv_context; /* hypervisor-only */
>> -    const struct arch_vpmu_ops *arch_vpmu_ops;
>>      struct xen_pmu_data *xenpmu_data;
>>      spinlock_t vpmu_lock;
>>  };
>>  
>>  /* VPMU states */
>> -#define VPMU_CONTEXT_ALLOCATED              0x1
>> -#define VPMU_CONTEXT_LOADED                 0x2
>> -#define VPMU_RUNNING                        0x4
>> -#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
>> -#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
>> -#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
>> +#define VPMU_INITIALIZED                    0x1
>> +#define VPMU_CONTEXT_ALLOCATED              0x2
>> +#define VPMU_CONTEXT_LOADED                 0x4
>> +#define VPMU_RUNNING                        0x8
>> +#define VPMU_CONTEXT_SAVE                   0x10  /* Force context save */
>> +#define VPMU_FROZEN                         0x20  /* Stop counters while VCPU is not running */
>> +#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x40
>>  /* PV(H) guests: VPMU registers are accessed by guest from shared page */
>> -#define VPMU_CACHED                         0x40
>> -#define VPMU_AVAILABLE                      0x80
>> +#define VPMU_CACHED                         0x80
>> +#define VPMU_AVAILABLE                      0x100
>>  
>>  /* Intel-specific VPMU features */
>> -#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
>> -#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
>> +#define VPMU_CPU_HAS_DS                     0x1000 /* Has Debug Store */
>> +#define VPMU_CPU_HAS_BTS                    0x2000 /* Has Branch Trace Store */
> 
> Seeing as you're shuffling each of these, how about adding some leading
> 0's for alignment?

Fine with me; I did consider it when writing the patch, but decided
that such a non-mandatory style change may not be justified here (or
even in general), as such padding also has a downside: once a constant
with more significant digits gets added, all pre-existing ones need
touching just to insert yet another zero.
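
As a (purely hypothetical) illustration of that downside:

    #define FLAG_FOO                0x001
    #define FLAG_BAR                0x002
    #define FLAG_BAZ                0x004
    /* Later adding a flag at 0x1000 would force re-padding all of the
     * above to 0x0001 / 0x0002 / 0x0004 to keep the columns aligned. */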

Jan

