Hi Nilay,

>>  static int perf_event_read(struct perf_event *event, bool group)
>>  {
>> -       int ret = 0;
>> +       int ret = 0, cpu_to_read;
>>
>> -       /*
>> -        * If event is enabled and currently active on a CPU, update the
>> -        * value in the event structure:
>> -        */
>> -       if (event->state == PERF_EVENT_STATE_ACTIVE) {
>> +       cpu_to_read = find_cpu_to_read(event);
>> +
>> +       if (cpu_to_read >= 0) {
>>                 struct perf_read_data data = {
>>                         .event = event,
>>                         .group = group,
>>                         .ret = 0,
>>                 };
>> -               ret = smp_call_function_single(event->oncpu,
>> +               ret = smp_call_function_single(cpu_to_read,
>>                                                __perf_event_read, &data, 1);
>>                 ret = ret ? : data.ret;
>>         } else if (event->state == PERF_EVENT_STATE_INACTIVE) {
>>
>
> I would like to suggest a small change to this patch.  I think the check on
> PERF_EVENT_STATE_ACTIVE should be retained in the perf_event_read()
> function.  The new function should assume that the event is active.  I find
> this more readable since the next check in function perf_event_read() is on
> PERF_EVENT_STATE_INACTIVE.

Two upcoming flags that Intel CQM/CMT will use are meant to allow reads
even when the event is inactive. This makes sense for CQM/CMT because the
hw RMID is always reserved. I am OK with keeping the check for
STATE_ACTIVE until those flags are actually introduced, though.