Greg Stark wrote:
> 
> That's what I want to believe. But picture if you have, say, a
> 1-terabyte table which is 50% dead tuples and you don't have a
> spare terabyte to rewrite the whole table.

Could one hypothetically do
   update bigtable set pk = pk
    where ctid in (select ctid from bigtable order by ctid desc limit 100);
   vacuum bigtable;
and repeat until max(ctid) is small enough?

Sure, it'll take longer than vacuum full, but at first glance it seems
lightweight enough to run even on a live, heavily accessed table.
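
Since vacuum can't run inside a transaction block, the looping would
have to be driven from a script or by hand; between rounds, something
like this (bigtable being a placeholder name, of course) would show
whether the file is actually getting shorter:
   -- current physical size of the table, in pages
   select pg_relation_size('bigtable')
          / current_setting('block_size')::int;
   -- highest tuple location; vacuum can only truncate the run of
   -- completely empty pages beyond this point
   select max(ctid) from bigtable;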

IIRC I tried something like this once, and it worked to some extent,
but after a few iterations it didn't shrink the table as much as I had
expected.
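
In hindsight, my guess is that HOT is what defeats it: since pk = pk
doesn't actually change any indexed value, the update can be done as a
HOT update, and if the old page still has free space the new version
lands right back on the same (tail) page, leaving vacuum nothing to
truncate. The stats should show whether that's happening (again with
bigtable as the placeholder name):
   -- if n_tup_hot_upd climbs in step with n_tup_upd, the rewritten
   -- tuples are staying on their original pages
   select n_tup_upd, n_tup_hot_upd
     from pg_stat_user_tables
    where relname = 'bigtable';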

