On Mon, May 22, 2017 at 04:55:47PM +0000, Liang, Kan wrote:
> 
> 
> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> > > index 580b60f..e8b2326 100644
> > > --- a/arch/x86/events/core.c
> > > +++ b/arch/x86/events/core.c
> > > @@ -101,6 +101,10 @@ u64 x86_perf_event_update(struct perf_event *event)
> > >   delta = (new_raw_count << shift) - (prev_raw_count << shift);
> > >   delta >>= shift;
> > >
> > > + /* Correct the count number if applying ref_cycles replacement */
> > > + if (!is_sampling_event(event) &&
> > > +     (hwc->flags & PERF_X86_EVENT_REF_CYCLES_REP))
> > > +         delta *= x86_pmu.ref_cycles_factor;
> > 
> > That condition seems wrong; why only correct for !sampling events?
> >
> 
> For sampling events, it's either fixed-frequency mode or fixed-period mode.
>  - In fixed-frequency mode, we should do nothing, because the adaptive
>    frequency algorithm will compensate.
>  - In fixed-period mode, we have already adjusted the period in
>    ref_cycles_rep(); see the sketch below.
> Therefore, we only need to handle !sampling events here.
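ref_cycles_rep() itself is not visible in the quoted hunk. A minimal sketch of
what the fixed-period adjustment could look like, assuming the substitute event
ticks x86_pmu.ref_cycles_factor times slower than true ref-cycles (the body
below is illustrative, not the posted patch):

/*
 * Illustrative sketch only -- not the code from the posted patch.
 * Assumption: the substitute event advances ref_cycles_factor times
 * more slowly than true ref-cycles, so a fixed period expressed in
 * reference cycles must be divided down for the substitute counter
 * to overflow at the intended rate.
 */
static void ref_cycles_rep(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;

	hwc->flags |= PERF_X86_EVENT_REF_CYCLES_REP;

	/* Fixed-period sampling: pre-scale the period. */
	if (is_sampling_event(event) && !event->attr.freq)
		hwc->sample_period = div64_u64(hwc->sample_period,
					       x86_pmu.ref_cycles_factor);
}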

How so? For sampling events the actual event count should also be
accurate.
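To make the objection concrete: even a sampling event's aggregate count is
read back through x86_perf_event_update(), e.g. via read() on the event fd.
A minimal userspace sketch (the period and workload are placeholders, not
from the thread); if the delta multiplication is gated on
!is_sampling_event(), this read() reports a count that is ref_cycles_factor
times too small:

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_REF_CPU_CYCLES;
	attr.sample_period = 2000000;	/* fixed period => sampling event */
	attr.exclude_kernel = 1;	/* allow running unprivileged */
	attr.disabled = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... workload ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	/* With no read_format flags set, read() returns the raw u64 count;
	 * it must already be in reference cycles, sampling or not. */
	read(fd, &count, sizeof(count));
	printf("ref-cycles: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}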
