I'm so sorry -- I meant to give the version, of course. It's 9.6.13.

Thanks,
Walter


On Mon, May 20, 2019 at 6:05 PM Tom Lane <t...@sss.pgh.pa.us> wrote:

> Walter Smith <wal...@carezone.com> writes:
> > Today we deleted about 15 million rows in one transaction from this table.
> > Immediately afterwards, a particular SELECT started running very slowly --
> > 500 to 3000 ms rather than the usual <1ms.
>
> > We did an EXPLAIN ANALYZE on this select and it was still doing an index
> > scan as usual. The *planning time* for the query is what had gotten slow.
> > The query itself was still executing in <1ms.
>
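[In case it helps anyone searching the archives: this is roughly how we
isolated it. The table and column names below are made up, and the timings
are only illustrative of the range we observed.]

    EXPLAIN ANALYZE
    SELECT * FROM tasks WHERE user_id = 12345;

    --  Index Scan using tasks_user_id_idx on tasks ...
    --  Planning time: 2751.692 ms    <- inflated after the mass delete
    --  Execution time: 0.651 ms      <- the scan itself is still fast
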
> > Over the next few hours the time slowly improved, until it returned to the
> > former performance. You can see a graph at https://imgur.com/a/zIfqkF5.
>
> Were the deleted rows all at one end of the index in question?
>
> If so, this is likely down to the planner trying to use the index to
> identify the extremal live value of the column, which it wants to know
> in connection with planning mergejoins (so I'm assuming your problem
> query involved a join on the indexed column --- whether or not the
> final plan did a mergejoin, the planner would consider this).  As
> long as there's a live value near the end of the index, this is pretty
> cheap.  If the planner has to trawl through a bunch of dead entries
> to find the nearest-to-the-end live one, not so much.
>
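[A self-contained sketch of the effect you describe, if I follow -- all
names and sizes here are hypothetical, not our real schema:]

    -- Build a table with 15M rows, then delete everything but the low
    -- end, leaving the high end of the index full of dead entries.
    CREATE TABLE t (id bigint PRIMARY KEY, payload text);
    INSERT INTO t SELECT g, 'x' FROM generate_series(1, 15000000) g;
    CREATE TABLE u (id bigint PRIMARY KEY);
    INSERT INTO u SELECT g FROM generate_series(1, 1000) g;
    ANALYZE t; ANALYZE u;

    DELETE FROM t WHERE id > 1000;

    -- Merely planning a join on the indexed column (no need to run it)
    -- now has to trawl past the dead entries to find the live maximum
    -- of t.id, so the reported planning time is visibly inflated until
    -- vacuum cleans those entries up:
    EXPLAIN SELECT * FROM t JOIN u USING (id);
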
> Subsequent vacuuming would eventually delete the dead index entries
> and return things to normal; although usually the performance comes
> back all-of-a-sudden at the next (auto)VACUUM of the table.  So I'm
> a bit intrigued by your seeing it "gradually" improve.  Maybe you
> had old open transactions that were limiting VACUUM's ability to
> remove rows?
>
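[For anyone else hitting this, one way to look for such transactions, and
then force the cleanup, might be the following -- the pg_stat_activity
columns are as of 9.6, and the table name is hypothetical:]

    -- Find sessions whose transactions have been open longest; these
    -- can prevent VACUUM from removing dead rows and index entries.
    SELECT pid, state, xact_start, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 5;

    -- Once nothing old is in the way, a manual vacuum brings planning
    -- time back immediately instead of waiting for autovacuum:
    VACUUM VERBOSE tasks;
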
> We've made a couple of rounds of adjustments of the behavior to try
> to avoid/reduce this penalty, but since you didn't say what PG version
> you're running, it's hard to tell whether an upgrade would help.
>
>                         regards, tom lane
>
