Hi Mao,

On Tue, Jun 4, 2019 at 10:25 AM Mao Han <han_...@c-sky.com> wrote:
>
> The csky pmu counter may have different io width. When the counter is
> smaller then 64 bits and counter value is smaller than the old value, it
> will result to a extremely large delta value. So the sampled value should
> be extend to 64 bits to avoid this, the extension bits base on the
> count-width property from dts.
>
> Signed-off-by: Mao Han <han_...@c-sky.com>
> Cc: Guo Ren <guo...@kernel.org>
> ---
>  arch/csky/kernel/perf_event.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/csky/kernel/perf_event.c b/arch/csky/kernel/perf_event.c
> index c022acc..36f7f20 100644
> --- a/arch/csky/kernel/perf_event.c
> +++ b/arch/csky/kernel/perf_event.c
> @@ -9,6 +9,7 @@
>  #include <linux/platform_device.h>
>
>  #define CSKY_PMU_MAX_EVENTS 32
> +#define DEFAULT_COUNT_WIDTH 48
>
>  #define HPCR		"<0, 0x0>"	/* PMU Control reg */
>  #define HPCNTENR	"<0, 0x4>"	/* Count Enable reg */
> @@ -18,6 +19,7 @@ static void (*hw_raw_write_mapping[CSKY_PMU_MAX_EVENTS])(uint64_t val);
>
>  struct csky_pmu_t {
>  	struct pmu	pmu;
> +	uint32_t	count_width;
>  	uint32_t	hpcr;
>  } csky_pmu;
>
> @@ -806,7 +808,12 @@ static void csky_perf_event_update(struct perf_event *event,
>  			  struct hw_perf_event *hwc)
>  {
>  	uint64_t prev_raw_count = local64_read(&hwc->prev_count);
> -	uint64_t new_raw_count = hw_raw_read_mapping[hwc->idx]();
> +	/*
> +	 * Sign extend count value to 64bit, otherwise delta calculation
> +	 * would be incorrect when overflow occurs.
> +	 */
> +	uint64_t new_raw_count = sign_extend64(
> +		hw_raw_read_mapping[hwc->idx](), csky_pmu.count_width);

Should this be csky_pmu.count_width - 1? sign_extend64() takes the 0-based
index of the sign bit, not the counter width.
Best Regards
 Guo Ren