On Tue, May 21, 2019 at 02:40:53PM -0700, kan.li...@linux.intel.com wrote:
> From: Kan Liang <kan.li...@linux.intel.com>
> 
> To get correct PERF_METRICS value, the fixed counter 3 must start from
> 0. It would bring problems when sampling read slots and topdown events.
> For example,
>   perf record -e '{slots, topdown-retiring}:S'
> The slots would not overflow if it starts from 0.
> 
> Add specific validate_group() support to reject the case and error out
> for Icelake.
> 
> Signed-off-by: Kan Liang <kan.li...@linux.intel.com>
> ---
>  arch/x86/events/core.c       |  2 ++
>  arch/x86/events/intel/core.c | 20 ++++++++++++++++++++
>  arch/x86/events/perf_event.h |  2 ++
>  3 files changed, 24 insertions(+)
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 07ecfe75f0e6..a7eb842f8651 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2065,6 +2065,8 @@ static int validate_group(struct perf_event *event)
>  	fake_cpuc->n_events = 0;
>  	ret = x86_pmu.schedule_events(fake_cpuc, n, NULL);
>  
> +	if (x86_pmu.validate_group)
> +		ret = x86_pmu.validate_group(fake_cpuc, n);
>  out:
>  	free_fake_cpuc(fake_cpuc);
>  	return ret;
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 79e9d05e047d..2bb90d652a35 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4410,6 +4410,25 @@ static int icl_set_period(struct perf_event *event)
>  	return 1;
>  }
>  
> +static int icl_validate_group(struct cpu_hw_events *cpuc, int n)
> +{
> +	bool has_sampling_slots = false, has_metrics = false;
> +	struct perf_event *e;
> +	int i;
> +
> +	for (i = 0; i < n; i++) {
> +		e = cpuc->event_list[i];
> +		if (is_slots_event(e) && is_sampling_event(e))
> +			has_sampling_slots = true;
> +
> +		if (is_perf_metrics_event(e))
> +			has_metrics = true;
> +	}
> +	if (unlikely(has_sampling_slots && has_metrics))
> +		return -EINVAL;
> +	return 0;
> +}
Why this special hack? Why not disallow sampling on SLOTS at creation?
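
(For reference, a minimal sketch of that alternative, assuming the check
could sit in an Icelake-specific hw_config hook and reusing the
is_slots_event()/is_sampling_event() helpers introduced by this series;
the icl_hw_config() name and its placement are hypothetical and not part
of the posted patch.)

static int icl_hw_config(struct perf_event *event)
{
	/*
	 * Hypothetical sketch: fixed counter 3 must count up from 0 for
	 * PERF_METRICS, so a sampling SLOTS event can never be programmed
	 * to overflow as requested.  Refuse it when the event is created
	 * instead of waiting for group validation.
	 */
	if (is_slots_event(event) && is_sampling_event(event))
		return -EINVAL;

	return intel_pmu_hw_config(event);
}

This would make perf_event_open() fail for any sampling use of SLOTS,
e.g. '{slots, topdown-retiring}:S', rather than only rejecting the
combination with metrics events during group validation.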