On Thu, Apr 06, 2017 at 12:42:40PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)" <rost...@goodmis.org>
> 
> There are certain parts of the kernel that can not let stack tracing
> proceed (namely in RCU), because the stack tracer uses RCU, and parts of RCU
> internals can not handle having RCU read side locks taken.
> 
> Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
> stop stack tracing on the current CPU as it is in those critical sections.

s/as it is in/when it is in/?
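
For anyone reading along, the intended call pattern is roughly the
sketch below.  Everything except stack_tracer_disable() and
stack_tracer_enable() is a made-up placeholder; the real call sites
are in RCU, as the changelog says:

	/*
	 * Hypothetical caller: a region (for example deep in RCU's
	 * idle-entry path) that must not be stack traced.  Only the
	 * two new functions come from this patch.
	 */
	static void enter_trace_sensitive_region(void)
	{
		stack_tracer_disable();		/* stop stack tracing on this CPU */
		do_sensitive_work();		/* hypothetical helper */
		stack_tracer_enable();		/* re-enable it shortly after */
	}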

> Signed-off-by: Steven Rostedt (VMware) <rost...@goodmis.org>

One quibble above, one objection below.

                                                        Thanx, Paul

> ---
>  include/linux/ftrace.h     |  6 ++++++
>  kernel/trace/trace_stack.c | 28 ++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index ef7123219f14..40afee35565a 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -286,6 +286,12 @@ int
>  stack_trace_sysctl(struct ctl_table *table, int write,
>                  void __user *buffer, size_t *lenp,
>                  loff_t *ppos);
> +
> +void stack_tracer_disable(void);
> +void stack_tracer_enable(void);
> +#else
> +static inline void stack_tracer_disable(void) { }
> +static inline void stack_tracer_enable(void) { }
>  #endif
> 
>  struct ftrace_func_command {
> diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> index 05ad2b86461e..5adbb73ec2ec 100644
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -41,6 +41,34 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
>  int stack_tracer_enabled;
>  static int last_stack_tracer_enabled;
> 
> +/**
> + * stack_tracer_disable - temporarily disable the stack tracer
> + *
> + * There are a few locations (namely in RCU) where stack tracing
> + * can not be executed. This function is used to disable stack
> + * tracing during those critical sections.
> + *
> + * This function will disable preemption. stack_tracer_enable()
> + * must be called shortly after this is called.
> + */
> +void stack_tracer_disable(void)
> +{
> +     preempt_disable_notrace();

Interrupts are disabled at all current call sites, so you don't really
need to disable preemption.  I would normally not worry, given the
ease-of-use improvements, but some people get annoyed about even slight
increases in idle-entry overhead.
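
If the interrupts-disabled guarantee really does hold at every call
site, a leaner variant could skip the preempt-count manipulation
entirely.  A rough sketch, not a tested patch, just to make the
overhead point concrete:

	/*
	 * Sketch only: assumes every caller already runs with
	 * interrupts disabled, so the per-CPU counter needs no
	 * extra preemption protection.
	 */
	void stack_tracer_disable(void)
	{
		WARN_ON_ONCE(!irqs_disabled());	/* document the assumption */
		this_cpu_inc(trace_active);
	}

	void stack_tracer_enable(void)
	{
		WARN_ON_ONCE(!irqs_disabled());
		this_cpu_dec(trace_active);
	}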

> +     this_cpu_inc(trace_active);
> +}
> +
> +/**
> + * stack_tracer_enable - re-enable the stack tracer
> + *
> + * After stack_tracer_disable() is called, stack_tracer_enable()
> + * must be called shortly afterward.
> + */
> +void stack_tracer_enable(void)
> +{
> +     this_cpu_dec(trace_active);
> +     preempt_enable_notrace();

Ditto...

> +}
> +
>  void stack_trace_print(void)
>  {
>       long i;
> -- 
> 2.10.2
> 
> 
