On Thu, Feb 27, 2020 at 7:01 AM Yugo NAGATA <nag...@sraoss.co.jp> wrote:

> On Thu, 27 Feb 2020 06:27:24 +0100
> Pavel Stehule <pavel.steh...@gmail.com> wrote:
>
> > On Thu, Feb 27, 2020 at 6:16 AM Kyotaro Horiguchi
> > <horikyota....@gmail.com> wrote:
> >
> > > Hello.
> > >
> > > At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nag...@sraoss.co.jp>
> > > wrote in
> > > > On Wed, 26 Feb 2020 18:51:21 +0100
> > > > Julien Rouhaud <rjuju...@gmail.com> wrote:
> > > >
> > > > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:
> > > > > > Hi,
> > > > > >
> > > > > > Attached is a patch for allowing auto_explain to log plans before
> > > > > > queries are executed.
> > > > > >
> > > > > > Currently, auto_explain logs plans only after query execution,
> > > > > > so if a query gets stuck its plan cannot be logged. If we could
> > > > > > know the plans of stuck queries, we might get some hints for
> > > > > > resolving the problem. This is useful when you are testing and
> > > > > > debugging an application whose queries get stuck in some
> > > > > > situations.
> > > > >
> > > > > Indeed that could be useful.
> > > >
> > > > > > This patch adds a new option, log_before_query, to auto_explain.
> > > > >
> > > > > Maybe "log_before_execution" would be better?
> > > >
> > > > Thanks!  This seems better also to me.
> > > >
> > > > >
> > > > > > Setting the auto_explain.log_before_query option logs all plans
> > > > > > before queries are executed, regardless of
> > > > > > auto_explain.log_min_duration, unless it is set to -1 to disable
> > > > > > logging.  If log_before_query is enabled, only the duration is
> > > > > > logged after query execution, as in the case when both
> > > > > > log_statement and log_min_duration_statement are enabled.
> > > > >
> > > > > I'm not sure about this behavior.  The final explain plan is
> > > > > needed at least if log_analyze, log_buffers or log_timing are
> > > > > enabled.
> > > >
> > > > In the current patch, log_before_query (will be log_before_execution)
> > > > has no effect if log_analyze is enabled, in order to avoid logging the
> > > > same plan twice.  Instead, would it be better to always log the plan
> > > > twice, before and after execution, if log_before_query is enabled,
> > > > regardless of log_min_duration or log_analyze?
> > >
> > > Honestly, I don't think showing plans for all queries is useful
> > > behavior.
> > >
> > > If you allow the stuck query to be canceled, showing the plan in a
> > > PG_CATCH() block in explain_ExecutorRun would work, which would look
> > > like this.
> > >
> > > explain_ExecutorRun()
> > > {
> > >   ...
> > >   PG_TRY();
> > >   {
> > >       ...
> > >       else
> > >          standard_ExecutorRun();
> > >       nesting_level--;
> > >   }
> > >   PG_CATCH();
> > >   {
> > >       nesting_level--;
> > >
> > >       if (auto_explain_log_failed_plan &&
> > >        <maybe the time elapsed from start exceeds min_duration>)
> > >       {
> > >           'show the plan'
> > >       }
> > >       PG_RE_THROW();
> > >    }
> > >    PG_END_TRY();
> > > }
> > >
> > > regards.
> > >
> >
> > It can work, but it is still not a good enough solution. We need a "query
> > debugger" that allows getting some query execution metrics online.
> >
> > There was a problem with memory management for passing plans between
> > processes. Can we use temp files instead of shared memory?
>
>  I think the "query debugger" feature you proposed is out of the scope of
> the auto_explain module. I also think a feature to analyze a running query
> online would be great, but that will need a separate discussion on a new
> module or feature.
>

Sure. My note was about using auto_explain as a query debugger. That does
not make much sense, and from this perspective, the original proposal to log
the plan before execution makes more sense.

You could log every plan whose estimated cost is higher than some constant.
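For illustration only, a rough sketch of how that check could look inside
auto_explain's ExecutorStart hook, comparing the planner's estimated total
cost against a threshold; this is not code from the posted patch, and the
GUC variable auto_explain_log_min_cost is just an assumed name:

    /* Hypothetical sketch within auto_explain.c; auto_explain_log_min_cost
     * is an assumed GUC, not part of the posted patch or the existing
     * module. */
    static double auto_explain_log_min_cost = -1;   /* -1 disables logging */

    static void
    explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
    {
        if (prev_ExecutorStart)
            prev_ExecutorStart(queryDesc, eflags);
        else
            standard_ExecutorStart(queryDesc, eflags);

        if (auto_explain_log_min_cost >= 0 &&
            queryDesc->plannedstmt->planTree->total_cost >=
            auto_explain_log_min_cost)
        {
            ExplainState *es = NewExplainState();

            /* Print the plan text before the query starts running. */
            ExplainBeginOutput(es);
            ExplainPrintPlan(es, queryDesc);
            ExplainEndOutput(es);

            ereport(LOG,
                    (errmsg("plan before execution:\n%s", es->str->data),
                     errhidestmt(true)));
        }
    }

Combined with the proposed log_before_execution option, such a threshold
would keep the log volume down to plans that are actually worth looking at.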



> Regards,
> Yugo Nagata
>
> --
> Yugo NAGATA <nag...@sraoss.co.jp>
>
>
>
