Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

2022-09-26 Thread Juergen Gross
On 26.09.22 22:09, Boris Ostrovsky wrote:
> On 9/26/22 10:18 AM, Juergen Gross wrote:
>>   bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
>>   {
>>       if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
>> -         if (is_amd_pmu_msr(msr)) {
>> -             if (!xen_amd_pmu_emulate(msr, val, 1)

Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

2022-09-26 Thread Boris Ostrovsky
On 9/26/22 10:18 AM, Juergen Gross wrote:
> bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err)
> {
>     if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
> -       if (is_amd_pmu_msr(msr)) {
> -           if (!xen_amd_pmu_emulate(msr, val, 1))
> -

Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

2022-09-26 Thread Juergen Gross
On 26.09.22 17:29, Jan Beulich wrote:
> On 26.09.2022 16:18, Juergen Gross wrote:
>> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
>> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
>> the caller to select the potentially faulting variant by passing NULL
>> for the error pointer.

Re: [PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

2022-09-26 Thread Jan Beulich
On 26.09.2022 16:18, Juergen Gross wrote:
> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
> the caller to select the potentially faulting variant by passing NULL
> for the error pointer.

Maybe make t

[PATCH 1/3] xen/pv: allow pmu msr accesses to cause GP

2022-09-26 Thread Juergen Gross
Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants of read/write MSR in case the MSR access isn't emulated via Xen. Allow the caller to select the potentially faulting variant by passing NULL for the error pointer. Remove one level of indentation by restructuring the code a little.
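The calling convention the commit message describes can be sketched as follows. This is a minimal illustration only, not the actual kernel code: the helpers do_rdmsr() and do_rdmsr_safe() are hypothetical stand-ins for the real rdmsr()/rdmsr_safe() machinery, and the Xen emulation path is omitted.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in state so the sketch is self-contained. */
static uint64_t fake_msr_value = 0x1234;

/* Faulting variant: in the real kernel this access could raise #GP. */
static uint64_t do_rdmsr(unsigned int msr)
{
    (void)msr;
    return fake_msr_value;
}

/* Safe variant: reports failure through a return code instead of faulting. */
static int do_rdmsr_safe(unsigned int msr, uint64_t *val)
{
    (void)msr;
    *val = fake_msr_value;
    return 0;
}

/*
 * Sketch of the interface change: a NULL error pointer selects the
 * potentially faulting MSR access; a non-NULL one selects the safe
 * access and stores its result in *err.  (The real function first
 * tries to emulate the access via Xen; that path is omitted here.)
 */
static bool pmu_msr_read_sketch(unsigned int msr, uint64_t *val, int *err)
{
    if (err == NULL)
        *val = do_rdmsr(msr);       /* may fault in the real kernel */
    else
        *err = do_rdmsr_safe(msr, val);

    return true;
}
```

A caller that can tolerate a fault (or wants one, per the patch title "allow pmu msr accesses to cause GP") passes NULL; a caller that needs a recoverable error passes a real pointer.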