On Tue, Sep 1, 2009 at 9:29 PM, Alvaro Herrera <alvhe...@commandprompt.com> wrote:
> Ron Mayer wrote:
>> Greg Stark wrote:
>> >
>> > That's what I want to believe. But picture if you have, say a
>> > 1-terabyte table which is 50% dead tuples and you don't have a spare
>> > 1-terabytes to rewrite the whole table.
>>
>> Could one hypothetically do
>>    update bigtable set pk = pk where ctid in (select ctid from bigtable 
>> order by ctid desc limit 100);
>>    vacuum;
>> and repeat until max(ctid) is small enough?
>
> I remember Hannu Krosing said they used something like that to shrink
> really bloated tables.  Maybe we should try to explicitly support a
> mechanism that works in that fashion.  I think I tried it at some point
> and found that the problem was that ctid was too limited in what it
> could do.

I think a way to incrementally shrink large tables would be enormously
beneficial.  Maybe vacuum could try to do a bit of that each time it
runs.
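
To make that concrete, here is a minimal sketch of one round of the manual
approach Ron describes above, using his hypothetical bigtable/pk names; a
driver script would repeat it (outside a transaction block, since VACUUM
cannot run inside one) until pg_relation_size('bigtable') stops going down:

    -- Force new versions of the rows sitting on the highest-numbered pages;
    -- the new versions should land in free space earlier in the table.
    UPDATE bigtable
       SET pk = pk
     WHERE ctid IN (SELECT ctid
                      FROM bigtable
                     ORDER BY ctid DESC
                     LIMIT 100);

    -- VACUUM can then truncate the now-empty pages at the end of the file.
    VACUUM bigtable;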

...Robert
