On Thu, Aug 3, 2017 at 7:08 PM, Alexei Starovoitov <a...@fb.com> wrote:
> On 8/3/17 6:29 AM, Yonghong Song wrote:
>>
>> @@ -578,8 +596,9 @@ static void perf_syscall_enter(void *ignore, struct
>> pt_regs *regs, long id)
>>         if (!sys_data)
>>                 return;
>>
>> +       prog = READ_ONCE(sys_data->enter_event->prog);
>>         head = this_cpu_ptr(sys_data->enter_event->perf_events);
>> -       if (hlist_empty(head))
>> +       if (!prog && hlist_empty(head))
>>                 return;
>>
>>         /* get the size after alignment with the u32 buffer size field */
>> @@ -594,6 +613,13 @@ static void perf_syscall_enter(void *ignore, struct
>> pt_regs *regs, long id)
>>         rec->nr = syscall_nr;
>>         syscall_get_arguments(current, regs, 0, sys_data->nb_args,
>>                                (unsigned long *)&rec->args);
>> +
>> +       if ((prog && !perf_call_bpf_enter(prog, regs, sys_data, rec)) ||
>> +           hlist_empty(head)) {
>> +               perf_swevent_put_recursion_context(rctx);
>> +               return;
>> +       }
>
>
> hmm. if I read the patch correctly that makes it different from
> kprobe/uprobe/tracepoints+bpf behavior. Why make it different and
> force user space to perf_event_open() on every cpu?
> In other cases it's the job of the bpf program to filter by cpu
> if necessary and that is well understood by bcc scripts.

The patch actually does allow the bpf program to track all cpus.
The test:
>> +       if (!prog && hlist_empty(head))
>>                 return;
ensures that if prog is not NULL, the function will not return early
even when the perf event list on the current cpu is empty. Later on,
perf_call_bpf_enter is called whenever prog is not NULL, so the bpf
program executes regardless of which cpu the syscall happens on.

Maybe I missed something here?
