Angva wrote:
Looking for a small bit of advice...

I have a script that updates several tables with large amounts of data.
Before running the updates, it drops all indexes for optimal
performance. When the updates have finished, I run the following
procedure:

recreate the indexes
cluster the tables
vacuum full analyze on the tables
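
For concreteness, that maintenance pass corresponds roughly to the
following SQL (table, column, and index names here are only
placeholders, and older releases spell the CLUSTER step as
"CLUSTER indexname ON tablename"):

    -- recreate an index that was dropped before the bulk update
    CREATE INDEX my_table_col_idx ON my_table (col);
    -- physically re-order the table on that index
    CLUSTER my_table USING my_table_col_idx;
    -- reclaim dead space and refresh planner statistics
    VACUUM FULL ANALYZE my_table;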

Hi all,

I'd like to see a general way to take indexes offline without actually losing their definitions. For example, something like "ALTER TABLE [EN | DIS] ABLE INDEXES", "ALTER INDEX [EN | DIS] ABLE", etc. This could also be used internally when a backend encounters an error reading or writing an index. Rather than refusing to execute queries, it could simply ignore indexes it knows are disabled or bad in some way and re-plan as needed.
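
Purely as an illustration of the proposed syntax (none of this exists in
PostgreSQL today), usage might look something like:

    -- hypothetical commands, shown only to make the proposal concrete
    ALTER TABLE my_table DISABLE INDEXES;
    -- ... run the bulk updates with no index maintenance ...
    ALTER TABLE my_table ENABLE INDEXES;    -- rebuilds / re-validates them
    -- or on a single index:
    ALTER INDEX my_table_col_idx DISABLE;
    ALTER INDEX my_table_col_idx ENABLE;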

This would have two benefits. First, the above scenario would be much simpler. Rather than dropping and re-creating indexes, you could just disable and then re-enable them without needing any knowledge of their structure. Secondly, it would allow us to put indexes in an alternate tablespace on a non-redundant volume and, in the case of a drive failure, be able to limp along and get the system back to normal simply by replacing the disk and issuing a REINDEX command.
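
To sketch that second case (paths and names below are made up), the
indexes would live in a tablespace on the cheap disk, and recovery after
replacing it would be a rebuild, in the spirit of the suggestion above:

    -- tablespace on a non-redundant volume
    CREATE TABLESPACE idx_space LOCATION '/mnt/scratch/pg_indexes';
    CREATE INDEX my_table_col_idx ON my_table (col) TABLESPACE idx_space;
    -- after swapping the failed disk and restoring the directory:
    REINDEX INDEX my_table_col_idx;     -- or REINDEX TABLE my_table;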

I realize there are a couple of gotchas with this. For example, what to do with unique indexes? Perhaps a backend would still need to refuse updates/inserts on a table with degraded unique indexes, unless the index was disabled explicitly? And then refuse to rebuild/re-enable the index as normal if non-unique values are found?
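
On that last point: before re-enabling a degraded unique index, the
offending rows could be found with an ordinary aggregate (again, table
and column names are placeholders):

    -- rows that would violate a unique index on (col)
    SELECT col, count(*)
    FROM my_table
    GROUP BY col
    HAVING count(*) > 1;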


Thx for considering :-)

-Glen Parker
