Hi Ganapat,
Sorry for the delay in replying; I was away most of last week.

On Tue, May 15, 2018 at 04:03:19PM +0530, Ganapatrao Kulkarni wrote:
> On Sat, May 5, 2018 at 12:16 AM, Ganapatrao Kulkarni <gklkm...@gmail.com> wrote:
> > On Thu, Apr 26, 2018 at 4:29 PM, Mark Rutland <mark.rutl...@arm.com> wrote:
> >> On Wed, Apr 25, 2018 at 02:30:47PM +0530, Ganapatrao Kulkarni wrote:

> >>> +static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
> >>> +{
> >>> +     int counter;
> >>> +
> >>> +     raw_spin_lock(&pmu_uncore->lock);
> >>> +     counter = find_first_zero_bit(pmu_uncore->counter_mask,
> >>> +                             pmu_uncore->uncore_dev->max_counters);
> >>> +     if (counter == pmu_uncore->uncore_dev->max_counters) {
> >>> +             raw_spin_unlock(&pmu_uncore->lock);
> >>> +             return -ENOSPC;
> >>> +     }
> >>> +     set_bit(counter, pmu_uncore->counter_mask);
> >>> +     raw_spin_unlock(&pmu_uncore->lock);
> >>> +     return counter;
> >>> +}
> >>> +
> >>> +static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
> >>> +                                     int counter)
> >>> +{
> >>> +     raw_spin_lock(&pmu_uncore->lock);
> >>> +     clear_bit(counter, pmu_uncore->counter_mask);
> >>> +     raw_spin_unlock(&pmu_uncore->lock);
> >>> +}
> >>
> >> I don't believe that locking is required in either of these, as the perf
> >> core serializes pmu::add() and pmu::del(), where these get called.
> 
> Without this locking, I am seeing "BUG: scheduling while atomic" when
> I run perf with more events than the maximum number of supported
> counters.

Did you manage to get to the bottom of this?

Do you have a backtrace?

Note that adding or removing a raw spinlock shouldn't affect that
splat: pmu::add() is already called with interrupts disabled, so
"scheduling while atomic" means something in that path can sleep, and
the backtrace should show what.

It looks like your latest posting reserves counters through the
userspace ABI, which doesn't seem right to me, so I'd like to
understand the underlying problem.
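
To make the earlier point concrete, here's a rough, untested sketch of
the lockless variant I'd expect, assuming the perf core's serialization
of pmu::add()/pmu::del() holds for this PMU (struct and field names are
taken from your patch):

	static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
	{
		int max = pmu_uncore->uncore_dev->max_counters;
		int counter;

		/*
		 * pmu::add() and pmu::del() are serialized by the perf
		 * core, so nothing else can allocate or free a counter
		 * concurrently; no lock is needed around find + set.
		 */
		counter = find_first_zero_bit(pmu_uncore->counter_mask, max);
		if (counter == max)
			return -ENOSPC;

		set_bit(counter, pmu_uncore->counter_mask);
		return counter;
	}

	static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
				 int counter)
	{
		clear_bit(counter, pmu_uncore->counter_mask);
	}

The set_bit()/clear_bit() calls are atomic bitops anyway, so even an
unexpected concurrent free wouldn't corrupt the mask. If the splat
still reproduces with something like this, the backtrace would tell us
where the real problem is.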

Thanks,
Mark.
