>> I'll try reindexing each of the tables just to make sure it's not strange 
>> index imbalance or something causing the issue.
> I've seen this issue a few times - and reindex helps.

Alas, our reindex doesn’t seem to have helped. I’m going to see if we can 
reproduce this on a non-prod environment so we can muck about a bit more. If we 
can reproduce it in a safe place, is there a tool we can use to get more 
information out of the query planner to find out what it’s doing that takes so 
long?
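
In the meantime, here's roughly how we've been measuring the split; EXPLAIN 
ANALYZE reports planning and execution time separately on 9.5 (the query below 
is a placeholder for our real three-table join, and the timings are the rough 
numbers from earlier in the thread):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;  -- the slow three-table join
    --  Planning time: 2500.000 ms   <-- the part we can't account for
    --  Execution time: 0.900 ms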

Regards,

David

From: Pavel Stehule <pavel.steh...@gmail.com>
Date: Monday, 4 November 2019 at 4:53 pm
To: David Wheeler <dwhee...@dgitsystems.com>
Cc: Tom Lane <t...@sss.pgh.pa.us>, "pgsql-performance@lists.postgresql.org" 
<pgsql-performance@lists.postgresql.org>, Cameron Redpath 
<credp...@dgitsystems.com>
Subject: Re: Slow planning, fast execution for particular 3-table query



On Mon, 4 Nov 2019 at 6:17, David Wheeler <dwhee...@dgitsystems.com> wrote:
>    To see this issue, you have to have recently
>    inserted or deleted a bunch of extremal values of the indexed join-key
>    column.  And the problem only persists until those values become known
>    committed-good, or known dead-to-everybody.  (Maybe you've got a
>    long-running transaction somewhere, postponing the dead-to-everybody
>    condition?)

There are no long-running transactions that have backend_xmin set in 
pg_stat_activity, if that's what you mean here. There are also no open prepared 
transactions or replication slots, which I understand have a similar 
keeping-things-alive issue.
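
For the record, these are roughly the checks I ran (a sketch, with column 
lists trimmed):

    -- Oldest snapshot held by any running backend:
    SELECT pid, age(backend_xmin) AS xmin_age, state, query
      FROM pg_stat_activity
     WHERE backend_xmin IS NOT NULL
     ORDER BY age(backend_xmin) DESC;

    -- Orphaned prepared transactions and replication slots can pin the
    -- xmin horizon the same way:
    SELECT gid, prepared FROM pg_prepared_xacts;
    SELECT slot_name, xmin, catalog_xmin FROM pg_replication_slots;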

These tables are biggish (hundreds of MB), but not changing so frequently that 
I'd expect large quantities of data to be inserted or deleted before autovacuum 
can get in there and clean it up. And certainly not in a single uncommitted 
transaction.
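
(A quick sanity check on that, sketched against pg_stat_user_tables; the 
relname filter is a placeholder for the three tables in the join:)

    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     WHERE relname IN ('...');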

I'll try reindexing each of the tables just to make sure it's not strange index 
imbalance or something causing the issue.
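
(Concretely, something like the following per table; note that on 9.5 plain 
REINDEX blocks writes to the table while it runs, and REINDEX CONCURRENTLY 
only arrived in v12. The table name here is hypothetical:)

    REINDEX TABLE my_table;  -- rebuilds every index on the table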

I've seen this issue a few times - and reindex helps.

Pavel


Regards,

David

On 4/11/19, 4:01 pm, "Tom Lane" <t...@sss.pgh.pa.us> wrote:

    David Wheeler <dwhee...@dgitsystems.com> writes:
    > We’re having trouble working out why the planning time for this
    > particular query is slow (~2.5s vs 0.9ms execution time). As you can see
    > below, there are only 3 tables involved so it’s hard to imagine what
    > decisions the planner has to make that take so long.

    I wonder whether this traces to the cost of trying to estimate the
    largest/smallest value of an indexed column by looking into the index.
    Normally that's pretty cheap, but if you have a lot of recently-inserted
    or recently-deleted values at the end of the index, it can get painful.
    AFAIR this only happens for columns that are equijoin keys, so the fact
    that your query is a join is significant.

    I'm not convinced that this is the problem, because it's a corner case
    that few people hit.  To see this issue, you have to have recently
    inserted or deleted a bunch of extremal values of the indexed join-key
    column.  And the problem only persists until those values become known
    committed-good, or known dead-to-everybody.  (Maybe you've got a
    long-running transaction somewhere, postponing the dead-to-everybody
    condition?)
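
    A minimal sketch of how one might provoke this (hypothetical table
    names; two sessions needed):

        CREATE TABLE a (k bigint PRIMARY KEY);
        CREATE TABLE b (k bigint PRIMARY KEY);
        INSERT INTO a SELECT generate_series(1, 1000000);
        INSERT INTO b SELECT generate_series(1, 1000000);
        ANALYZE a; ANALYZE b;

        -- Session 1: delete a batch of extremal key values and hold the
        -- transaction open, so those index entries are neither known
        -- committed-good nor known dead-to-everybody:
        BEGIN;
        DELETE FROM a WHERE k > 900000;

        -- Session 2: while costing the join, the planner's endpoint probe
        -- must wade through the deleted-but-not-yet-dead index entries,
        -- so "Planning time" balloons while execution stays fast:
        EXPLAIN ANALYZE SELECT * FROM a JOIN b USING (k);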

    > Postgres version 9.5.19

    If this *is* the cause, v11 and up have a performance improvement that
    you need:

    https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3

                        regards, tom lane
