On Mon, May 30, 2016 at 11:05:17AM -0700, Jeff Janes wrote:
> So my theory is that you deleted a huge number of entries off
> either end of the index, that transaction committed, and that commit
> became visible to all.  Planning a mergejoin needs to dig through all
> those tuples to probe the true end-point.  On master, the index
> entries quickly get marked as LP_DEAD so future probes don't have to
> do all that work, but on the replicas those index hint bits are, for
> some reason unknown to me, not getting set.  So it has to scour all
> the heap pages that might have the smallest/largest tuple, on
> every planning cycle, and that list of pages is very large leading to
> occasional IO stalls.
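
For anyone following along, here is a minimal sketch of the pattern Jeff
describes; the table and column names below are hypothetical, not from
the actual schema in this report:

    -- strip a large range of entries off the low end of the index,
    -- then commit
    BEGIN;
    DELETE FROM big_table WHERE id < 10000000;
    COMMIT;

    -- merely planning a mergejoin-eligible query probes the index
    -- end-points (no execution needed); on a standby the LP_DEAD index
    -- hint bits don't get set (per Jeff's observation above), so every
    -- planning cycle repeats the reads of those dead entries' heap pages
    EXPLAIN SELECT * FROM big_table b JOIN other_table o ON b.id = o.id;

On the primary, the first such probe marks the dead index entries
LP_DEAD so later probes can skip them, which is why only the replicas
show the repeated IO.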

This I get, but why was the same backend reading data from all 3 of the
largest tables, when I know for sure (well, 99.9% sure) that no single
query touches all of them?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
                                                             http://depesz.com/

