On Mon, 9 Jul 2018 16:53:52 +0200
Claudio <claudio.font...@gliwa.com> wrote:

> 
> One additional data point, based on brute force again:
> 
> I applied this change, to understand whether it was the
> trace_event_raw_event_* functions (I suppose primarily
> trace_event_raw_event_switch) that contained the latency "offenders":
> 
> diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
> index 4ecdfe2..969467d 100644
> --- a/include/trace/trace_events.h
> +++ b/include/trace/trace_events.h
> @@ -704,6 +704,8 @@ trace_event_raw_event_##call(void *__data, proto)	\
>         struct trace_event_raw_##call *entry;                           \
>         int __data_size;                                                \
>                                                                         \
> +       return;                                                         \
> +                                                                       \
>         if (trace_trigger_soft_disabled(trace_file))                    \
>                 return;                                                 \
>                                                                         \
> 
> 
> This reduces the latency overhead to 6%, down from 25%.
> 
> Maybe obvious? Wanted to share in case it helps, and will dig further.
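
For context, that return lands right after the local variable
declarations in trace_event_raw_event_##call(), so it short-circuits
essentially all of the per-event work. Paraphrasing the expanded macro
body from a 4.x tree (a sketch only, not the exact code; "sched_switch"
stands in for any event, and the argument lists are elided):

    static notrace void
    trace_event_raw_event_sched_switch(void *__data, ...)
    {
            struct trace_event_file *trace_file = __data;
            struct trace_event_buffer fbuffer;
            struct trace_event_raw_sched_switch *entry;
            int __data_size;

            return;         /* the debug hack: skip everything below */

            /* event trigger / soft-disable check */
            if (trace_trigger_soft_disabled(trace_file))
                    return;

            /* compute the size of any dynamic fields */
            __data_size = trace_event_get_offsets_sched_switch(...);

            /* reserve space for the event in the ring buffer */
            entry = trace_event_buffer_reserve(&fbuffer, trace_file,
                                               sizeof(*entry) + __data_size);
            if (!entry)
                    return;

            /* ... assign the event fields ... */

            /* commit the event to the ring buffer */
            trace_event_buffer_commit(&fbuffer);
    }

So the drop from 25% to 6% brackets the cost of the trigger check plus
the ring-buffer reserve/fill/commit path.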

I noticed that just disabling tracing ("echo 0 > tracing_on") gives a
very similar result. I'm now recording timings of various parts of the
code, but the most I've seen is 12us, which should not account for that
much overhead. So it's triggering something else.
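
By "recording timings" I mean nothing fancier than bracketing suspect
sections with a clock read and keeping the worst case, roughly like
this (a sketch only, not the actual debugging patch; worst_delta is an
illustrative name):

    /* Sketch: time a suspect section with local_clock() (ns
     * resolution, usable in tracing context) and report new maxima.
     */
    static u64 worst_delta;

    u64 t0 = local_clock();

    /* ... section under test, e.g. the reserve/commit path ... */

    u64 delta = local_clock() - t0;
    if (delta > worst_delta) {
            worst_delta = delta;
            trace_printk("new worst case: %llu ns\n", delta);
    }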

I'll be going on PTO next week, and there are things I must do this
week, so I may not have much more time to look into this until I get
back from PTO (July 23rd).

-- Steve
