On Sat, Apr 17, 2021 at 20:36, Justin Pryzby <pry...@telsasoft.com> wrote:

> On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
> > On Sat, Apr 17, 2021 at 17:09, Justin Pryzby <pry...@telsasoft.com>
> > wrote:
> >
> > > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> > > > Today I worked on a Postgres server used for a critical service.
> > > > Because the application is very specific, we had to do the final
> > > > tuning on the production server. I fixed a lot of queries, but I am
> > > > not able to detect fast queries that do a full scan of middle-sized
> > > > tables (up to 1M rows). Of course, I cannot log all queries; these
> > > > queries currently run about 10 times per second.
> > > >
> > > > It would be nice to have the possibility to log queries that do a
> > > > full scan and read more tuples than a specified limit, or that do a
> > > > full scan of specified tables.
> > > >
> > > > What do you think about the proposed feature?
> > >
> > > Are you able to use auto_explain with auto_explain.log_min_duration?
> >
> > Unfortunately, I cannot use it. This server executes 5K queries per
> > second, and I am afraid to decrease log_min_duration.
> >
> > The logs are forwarded over the network, and the last time users
> > experimented with this, they had problems with the network.
> ..
> > A full scan of this table takes about 30 ms, and the table has 200K
> > rows. So decreasing log_min_duration to this value is very risky.
>
> auto_explain.sample_rate should allow setting a sufficiently low value
> of log_min_duration.  It has existed since v9.6.
>
>
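Concretely, that suggestion amounts to something like the following
postgresql.conf sketch - the thresholds here are illustrative
placeholders, not tuned values:

    # postgresql.conf -- log plans for a sample of queries over a low threshold
    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '25ms'  # below the ~30 ms full-scan time
    auto_explain.sample_rate = 0.05         # instrument only ~1 in 20 executions

With these settings, auto_explain randomly instruments about 1 in 20
executions and logs the plan of any instrumented query that runs longer
than 25 ms.
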
It cannot help - these queries are executed only a few times per second,
while at the same time this server executes 500-1000 other queries per
second.
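
To put rough numbers on this (taking the figures above at face value):

    target full-scan queries sampled/sec  ~= 10 * sample_rate
    other queries sampled/sec             ~= 750 * sample_rate   (midpoint of 500-1000)

A sample_rate low enough to keep the bulk traffic out of the log (say
0.01) captures the interesting plans only about once every ten seconds,
and if many of the other queries also run longer than ~25-30 ms, the
duration filter does little to separate the two groups.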

Regards

Pavel


> --
> Justin
>
