On Thu, Mar 14, 2019 at 07:29:17AM +0000, Stephan Schmidt wrote:
> I'm currently working on a high-performance database and want to make sure
> that whenever there are slow queries during regular operations I've got all
> information about the query in my logs. So auto_explain comes to mind, but the
> documentation explicitly states that it comes at a cost. My question is: how
> much latency does auto_explain add, in percent or in ms?

https://www.postgresql.org/docs/current/auto-explain.html
|auto_explain.log_analyze (boolean)
|...
|When this parameter is on, per-plan-node timing occurs for all statements
|executed, whether or not they run long enough to actually get logged. This can
|have an extremely negative impact on performance. Turning off
|auto_explain.log_timing ameliorates the performance cost, at the price of
|obtaining less information.

|auto_explain.log_timing (boolean)
|auto_explain.log_timing controls whether per-node timing information is
|printed when an execution plan is logged; it's equivalent to the TIMING option
|of EXPLAIN. The overhead of repeatedly reading the system clock can slow down
|queries significantly on some systems, so it may be useful to set this
|parameter to off when only actual row counts, and not exact times, are needed.
|This parameter has no effect unless auto_explain.log_analyze is enabled. This
|parameter is on by default. Only superusers can change this setting.
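
Since the docs blame "the overhead of repeatedly reading the system clock", you
can measure what one clock read costs on your particular hardware with the
pg_test_timing utility that ships with PostgreSQL (run it on the database host):

  $ pg_test_timing

It reports the average cost per clock read and a histogram of observed
durations. On a system with a slow clock source, per-node timing hurts
correspondingly more.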

I believe the cost actually varies significantly with the type of plan node:
the instrumentation reads the clock on every node execution, and a nested loop
re-executes its inner node once per outer row, so plans with "nested loops"
incur much higher overhead.

To quantify it, you could compare EXPLAIN (ANALYZE) vs EXPLAIN (ANALYZE,
TIMING OFF).  While you're at it, compare against running the query with no
EXPLAIN at all; see the sketch below.
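
For example (a minimal sketch; the table and query are placeholders, and
\timing is psql's client-side timer):

  \timing on
  SELECT count(*) FROM t JOIN u USING (id);                               -- baseline
  EXPLAIN (ANALYZE) SELECT count(*) FROM t JOIN u USING (id);             -- per-node timing
  EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM t JOIN u USING (id); -- row counts only

The gap between the last two runs approximates the cost of per-node timing;
the gap between the first and last approximates the cost of the ANALYZE
instrumentation itself.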

I suspect the overhead is inconsequential if you set log_timing=off and set
log_min_duration such that only the slowest queries are logged.
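
Something along these lines in postgresql.conf, say (the 1s threshold is only
an illustration; pick whatever "slow" means for your workload):

  shared_preload_libraries = 'auto_explain'
  auto_explain.log_min_duration = '1s'   # log plans only for queries slower than this
  auto_explain.log_analyze = on          # include actual row counts in the logged plan
  auto_explain.log_timing = off          # skip the per-node clock reads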

Then, you can manually run EXPLAIN (ANALYZE) with timing on (its default)
against any problematic queries, recovering the exact per-node times without
interfering with production clients.

Justin
