Hi,

I have an application where, each day, I drop and recreate a 1-million-row
table, reload it, and recreate its indexes. I do this to avoid having to
run VACUUM on the table, which I would need if I applied the deltas with
DELETEs or UPDATEs instead.

It seems that running VACUUM still has value even with this approach,
because when I do run it I still see index row versions being removed. I do
not explicitly drop the indexes, since they are dropped along with the table.
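
(For reference, the removed index row versions show up in the output of
something along these lines; the table name here is just a placeholder:

    VACUUM VERBOSE daily_data;
)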

In considering the use of TRUNCATE, I still have several indexes that, if
left in place, would slow down the data load.

My question is, what is the best way to manage a large table that gets
reloaded each day?

Drop table
Create table
Load (COPY or INSERT ... SELECT)
Create indexes
Vacuum anyway?
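
To make that concrete, the first approach looks roughly like this (table,
column, and file names are placeholders, not my real schema):

    DROP TABLE IF EXISTS daily_data;
    CREATE TABLE daily_data (
        id      bigint,
        payload text
    );

    -- bulk load: COPY from a flat file, or INSERT ... SELECT from staging
    COPY daily_data FROM '/path/to/daily_data.csv' WITH (FORMAT csv);

    -- build indexes only after the load, so each is built in a single pass
    CREATE INDEX daily_data_id_idx ON daily_data (id);

    -- is this still worth doing on a freshly rebuilt table?
    VACUUM ANALYZE daily_data;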

Or...

Drop indexes
Truncate table
Load (COPY or INSERT ... SELECT)
Create indexes
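
The TRUNCATE variant would be something like this (same placeholder names):

    -- remove indexes first so the bulk load is not slowed down by them
    DROP INDEX IF EXISTS daily_data_id_idx;
    TRUNCATE daily_data;
    COPY daily_data FROM '/path/to/daily_data.csv' WITH (FORMAT csv);
    CREATE INDEX daily_data_id_idx ON daily_data (id);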

And is vacuum still going to be needed?

Many Thanks,
Mike


