Robert Haas <robertmh...@gmail.com> wrote:

> The elephant in the room here is that if the relation is a million
> pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
> is going to help us truncate the relation unless every tuple on page
> 1,000,000 gets updated or deleted.

Perhaps bias, combined with a client utility to force non-HOT updates
of some rows at the end of the table, would provide the desired
behavior. (It would be nice if that could be built into vacuum, but
if that's not feasible, a separate utility is workable.)

Off the top of my head, I might set up a routine crontab job for most
databases to update the lesser of 1% of the rows or a number of rows
matching the amount by which the pages with free space exceed 10% of
total pages. That should contain things in most circumstances without
getting too extreme, and one could always run the utility manually to
correct more severe bloat. Some sort of delay (similar to what vacuum
can do) would be good, to keep the job from tanking performance.

We wouldn't care about the occasional update conflict -- we expect
that using a relational database means dealing with serialization
failures, and we'd be more than happy to accept a few of them in
exchange for keeping performance optimal. If your software is
designed to reschedule transactions which are rolled back for
serialization failures, they are just another performance cost, so
it's pretty easy to balance one cost against another.

-Kevin
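For concreteness, here is a minimal, untested sketch of what the core
loop of such a utility might look like, assuming a hypothetical table
t(id int primary key, ...); the cutoff page, the per-run LIMIT, and
the sleep interval are all illustrative placeholders:

-- Rough sketch only: force updates of rows sitting on the tail pages
-- of a hypothetical table t, so that a later VACUUM has a chance to
-- truncate the file.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT id
        FROM t
        -- There is no dedicated page-number accessor for tid, so go
        -- through text and point to extract the block number.
        WHERE (ctid::text::point)[0] >= 100000  -- first page to free
        ORDER BY (ctid::text::point)[0] DESC    -- start from the end
        LIMIT 1000  -- stand-in for the "lesser of 1% ..." heuristic
    LOOP
        -- A same-value update can still be executed as HOT and may
        -- stay on the same page; a real utility would have to change
        -- an indexed column (or otherwise defeat HOT), then recheck
        -- the row's ctid and retry until the row actually moved.
        UPDATE t SET id = r.id WHERE id = r.id;

        -- Crude throttle, in the spirit of vacuum's cost delay.
        PERFORM pg_sleep(0.01);
    END LOOP;
END;
$$;

Note that a DO block runs as a single transaction, so sleeping inside
it holds the row locks the whole time; a real client utility would
presumably issue these updates in small separate transactions, which
also fits the retry-on-serialization-failure scheme above.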