On 3/25/2025 1:31 AM, Mingwei Zhang wrote:
> From: Dapeng Mi <dapeng1...@linux.intel.com>
>
> Mediated vPMU needs to intercept EVENTSELx and FIXED_CNTR_CTRL MSRs to
> filter out guest malicious perf events. Either writing these MSRs or
> updating event filters would call reprogram_counter() eventually. Thus
> check if the guest event should be filtered out in reprogram_counter().
> If so, clear corresponding EVENTSELx MSR or FIXED_CNTR_CTRL field to
> ensure the guest event won't be really enabled at vm-entry.
>
> Besides, mediated vPMU intercepts the MSRs of these guest not owned
> counters and it just needs simply to read/write from/to pmc->counter.
>
> Suggested-by: Sean Christopherson <sea...@google.com>
> Signed-off-by: Dapeng Mi <dapeng1...@linux.intel.com>
> Co-developed-by: Mingwei Zhang <mizh...@google.com>
> Signed-off-by: Mingwei Zhang <mizh...@google.com>
> ---
>  arch/x86/kvm/pmu.c | 27 +++++++++++++++++++++++++++
>  arch/x86/kvm/pmu.h |  3 +++
>  2 files changed, 30 insertions(+)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 63143eeb5c44..e9100dc49fdc 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -305,6 +305,11 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)
>
>  void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
>  {
> +	if (kvm_mediated_pmu_enabled(pmc->vcpu)) {
> +		pmc->counter = val & pmc_bitmask(pmc);
> +		return;
> +	}
> +
>  	/*
>  	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
>  	 * read-modify-write. Adjust the counter value so that its value is
> @@ -455,6 +460,28 @@ static int reprogram_counter(struct kvm_pmc *pmc)
>  	bool emulate_overflow;
>  	u8 fixed_ctr_ctrl;
>
> +	if (kvm_mediated_pmu_enabled(pmu_to_vcpu(pmu))) {
> +		bool allowed = check_pmu_event_filter(pmc);
> +
> +		if (pmc_is_gp(pmc)) {
> +			if (allowed)
> +				pmc->eventsel_hw |= pmc->eventsel &
> +						    ARCH_PERFMON_EVENTSEL_ENABLE;
> +			else
> +				pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> +		} else {
> +			int idx = pmc->idx - KVM_FIXED_PMC_BASE_IDX;
> +
> +			if (allowed)
> +				pmu->fixed_ctr_ctrl_hw = pmu->fixed_ctr_ctrl;
Sean, I just found a potential bug here. "pmu->fixed_ctr_ctrl_hw" should not be
assigned the whole "pmu->fixed_ctr_ctrl" value here, otherwise other fixed
counters that were filtered out (not this allowed fixed counter) could be
enabled accidentally. Only this counter's control field should be merged in,
e.g. (a small standalone sketch of the bit math follows the quoted patch at
the end of this mail):

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index ba9d336f1d1d..f32e5f66f73b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -473,7 +473,8 @@ static int reprogram_counter(struct kvm_pmc *pmc)
 		int idx = pmc->idx - KVM_FIXED_PMC_BASE_IDX;
 
 		if (allowed)
-			pmu->fixed_ctr_ctrl_hw = pmu->fixed_ctr_ctrl;
+			pmu->fixed_ctr_ctrl_hw |= pmu->fixed_ctr_ctrl &
+						  intel_fixed_bits_by_idx(idx, 0xf);
 		else
 			pmu->fixed_ctr_ctrl_hw &=
 				~intel_fixed_bits_by_idx(idx, 0xf);

> +			else
> +				pmu->fixed_ctr_ctrl_hw &=
> +					~intel_fixed_bits_by_idx(idx, 0xf);
> +		}
> +
> +		return 0;
> +	}
> +
>  	emulate_overflow = pmc_pause_counter(pmc);
>
>  	if (!pmc_event_is_allowed(pmc))
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 509c995b7871..6289f523d893 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -113,6 +113,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
>  {
>  	u64 counter, enabled, running;
>
> +	if (kvm_mediated_pmu_enabled(pmc->vcpu))
> +		return pmc->counter & pmc_bitmask(pmc);
> +
>  	counter = pmc->counter + pmc->emulated_counter;
>
>  	if (pmc->perf_event && !pmc->is_paused)
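FWIW, here is a minimal standalone user-space sketch of the bit math (not KVM
code; FIXED_BITS_BY_IDX() is a local stand-in that assumes 4 control bits per
fixed counter, mirroring intel_fixed_bits_by_idx()). It shows how the blanket
assignment re-enables a previously filtered fixed counter, while the masked OR
only touches the allowed counter's field:

#include <stdint.h>
#include <stdio.h>

/* Local stand-in, assumes 4 control bits per fixed counter. */
#define FIXED_BITS_BY_IDX(idx, bits)	((uint64_t)(bits) << ((idx) * 4))

int main(void)
{
	/* Guest asks for fixed counters 0 and 1 (OS + USR enable bits). */
	uint64_t fixed_ctr_ctrl = FIXED_BITS_BY_IDX(0, 0x3) |
				  FIXED_BITS_BY_IDX(1, 0x3);

	/* Counter 0 was filtered out earlier, so the hw copy has it clear. */
	uint64_t fixed_ctr_ctrl_hw = 0;

	/* Now counter 1 passes the event filter and is reprogrammed. */
	int idx = 1;

	/* Buggy: copying the whole guest value also restores counter 0's bits. */
	uint64_t buggy = fixed_ctr_ctrl;

	/* Fixed: merge only counter 1's 4-bit control field. */
	uint64_t fixed = fixed_ctr_ctrl_hw |
			 (fixed_ctr_ctrl & FIXED_BITS_BY_IDX(idx, 0xf));

	printf("buggy hw ctrl: 0x%llx\n", (unsigned long long)buggy);	/* 0x33 */
	printf("fixed hw ctrl: 0x%llx\n", (unsigned long long)fixed);	/* 0x30 */
	return 0;
}

The |= with the per-index mask also keeps the "allowed" path symmetric with the
existing "else" path, which already clears only that counter's bits.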