On Tue, 25 Aug 2020 08:53:25 -0700 Joe Perches <j...@perches.com> wrote:
> > The print buffer is statically allocated and managed using code borrowed
> > from __ftrace_trace_stack() and is limited to 256 bytes (four of these
> > are allocated per CPU to handle the various contexts); anything larger
> > will be truncated.
> 
> There is an effort to avoid using trace_printk and the like
> so perhaps this feature should have the same compile time
> guard.

No, this is fine to have in a production kernel. Basically, these are simply debug printk()s that can also be written into the trace buffer. The key difference from trace_printk() is that they are events that must be enabled before they write into the buffer.

The problem I'm avoiding by not letting people add trace_printk() is that trace_printk() should only be used while a developer is debugging some code. The *only* trace_printk()s in a kernel should be the ones that developer adds (because then the buffer shows only what they want to find).

trace_printk()s are enabled by default, and they have a special switch to disable them. But it is an all-or-nothing switch: either all of them are enabled, or all of them are disabled. There is no in between.

Now, if we allow trace_printk()s to be scattered throughout the kernel, then when someone wants to use trace_printk() for their own debugging and that switch is turned on, the trace buffer fills with "noise" from all the other trace_printk()s scattered around, and the trace_printk()s that the poor developer added are lost in the sea of output, making them useless.

That is the reason I try hard to keep trace_printk() from entering the kernel.

-- Steve