On 07/27/2016 12:07 AM, Tom Lane wrote:
>> 4. Now, update that small table 500 times per second.
>> That's a recipe for runaway table bloat; VACUUM can't do much because
>> there's always some minutes-old transaction hanging around (and SNAPSHOT
>> TOO OLD doesn't really help, we're talking about minutes here), and
>> because of all of the indexes HOT isn't effective.
>
> Hm, I'm not following why this is a disaster.  OK, you have circa 100%
> turnover of the table in the lifespan of the slower transactions, but I'd
> still expect vacuuming to be able to hold the bloat to some small integer
> multiple of the minimum possible table size.  (And if the table is small,
> that's still small.)  I suppose really long transactions (pg_dump?) could
> be pretty disastrous, but there are ways around that, like doing pg_dump
> on a slave.

Is there any theoretical obstacle that would make it impossible to teach
VACUUM not to hold back the whole vacuum horizon, but just to leave alone
the tuples still visible to a single long-running REPEATABLE READ
transaction?
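
For what it's worth, the effect is easy to demonstrate by hand. A minimal
sketch follows (the table, its index, and the update rate are illustrative
assumptions, not taken from the case above):

  -- session 1: a small table where an index on the updated column defeats HOT
  CREATE TABLE counters (id int PRIMARY KEY, val int);
  CREATE INDEX counters_val_idx ON counters (val);
  INSERT INTO counters VALUES (1, 0);

  -- session 2: long-running REPEATABLE READ transaction
  BEGIN ISOLATION LEVEL REPEATABLE READ;
  SELECT count(*) FROM counters;  -- first query fixes the snapshot
  -- ... leave this session idle in transaction for several minutes ...

  -- session 1: run this ~500 times per second; each update leaves a dead tuple
  UPDATE counters SET val = val + 1 WHERE id = 1;

  -- session 2's snapshot xmin is what holds back the vacuum horizon:
  SELECT pid, backend_xmin FROM pg_stat_activity
  WHERE backend_xmin IS NOT NULL;

  -- VACUUM runs, but counts the piled-up versions among the "dead row
  -- versions cannot be removed yet" because they are newer than that xmin:
  VACUUM VERBOSE counters;

Since a REPEATABLE READ transaction uses exactly one snapshot for its whole
lifetime, in principle VACUUM would only need to preserve the tuple versions
visible to that one snapshot, rather than every version newer than its xmin,
which is what the question above is getting at.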
--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic Ltd