On Thu, Apr 27, 2017 at 5:22 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> But if we delete many rows from the beginning or end of an index, it
>>> would be very expensive too, because we will fetch each dead row and
>>> reject it.
>
>> Yep, and I've seen that turn into a serious problem in production.
>
> How so?  Shouldn't the indexscan go back and mark such tuples dead in
> the index, such that they'd be visited this way only once?  If that's
> not happening, maybe we should try to fix it.

Hmm.  Actually, I think the scenario I saw was one where a large
number of deleted tuples at the end of the index couldn't be marked
dead yet, because an old snapshot was being held open.  That index was
being scanned by lots of short-running queries.  Those queries
executed just fine, but they took a long time to plan, because the
planner had to step over all of the dead tuples in the index one by
one.  That increased planning time, multiplied by the number of times
it was incurred, was sufficient to cripple the system.
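
In case it helps, here's a rough sketch of how I'd expect to reproduce
that (the table and row counts are made up; the point is just to leave
a pile of not-yet-removable dead tuples at the end of an index while
the planner probes it for the actual column maximum, via
get_actual_variable_range(), if memory serves):

    -- session 1: hold an old snapshot open
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT 1;   -- first query in the transaction takes the snapshot

    -- session 2: pile up dead tuples at the end of an index
    CREATE TABLE t (id integer PRIMARY KEY);
    INSERT INTO t SELECT generate_series(1, 1000000);
    ANALYZE t;
    DELETE FROM t WHERE id > 900000;
    VACUUM t;   -- can't remove them; session 1 might still need them

    -- session 2, continued: planning, not execution, now pays the
    -- cost of stepping over the dead index tuples one by one
    \timing on
    EXPLAIN SELECT * FROM t WHERE id > 990000;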

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

