Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Tom Lane) would write:
> Since there's no performance difference at pg_dump time, I can't see any
> advantage to freezing your decision then.
This parallels the common suggestion of throwing an ANALYZE in at the bottom of a pg_dump script. On that particular note, I'd think it preferable to analyze each table right after loading it, since that table's data will still be in memory. But that's a _bit_ of a change of subject. (A rough sketch of what I mean is at the end of this note.)

This looks like something where a "hook" would be valuable, such that there is something in the pg_dump output that can be configured AFTER the fact to control how it gets loaded.

It would surely seem valuable to have a way of making loads go As Fast As Possible, even if "breakneck speed" carries some risk of actually breaking one's neck. If the hardware fails during the recovery, consider that you were _recovering_ from a _backup_; that surely ought to be an eminently redoable operation, quite unlike accepting a random SQL request from a user.

I have done some "recoveries" recently (well, more precisely, "installs") by taking a tarball of a pre-existing database and dropping it into place. I had no problem with the fact that if my hand slipped and hit ^C at the wrong moment (oh, the horror!), I would be forced to restart the "cd $TARGETDIR; tar xfvz Flex.tgz" process.

I would be pretty "game" for a near-single-user-mode approach that would turn off some of the usual functionality that we know we don't need, because the data source is an already-committed-and-FK-checked set of data. (There's also a sketch below of how close the existing knobs already get.)
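To make the "analyze after each table" idea concrete, here is a rough sketch of a reload loop. The database and table names are invented, and this isn't what pg_dump output looks like today; it's just one way to get a per-table hook in by hand:

    #!/bin/sh
    # Sketch: reload a database table by table, running ANALYZE immediately
    # after each table loads, while its pages are presumably still cached,
    # instead of one big ANALYZE at the very end of the restore.
    # Names are purely illustrative; assumes the schema already exists in $DST,
    # and that the table list is ordered so FK dependencies are satisfied.
    SRC=olddb
    DST=newdb

    for TBL in customers orders order_lines; do
        # Copy this one table's data across
        pg_dump --data-only -t "$TBL" "$SRC" | psql -d "$DST" || exit 1
        # Analyze it right away, before moving on to the next table
        psql -d "$DST" -c "ANALYZE $TBL;" || exit 1
    done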
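And as a rough approximation of that "near-single-user-mode" load using pieces that already exist (again, names are invented, and this is a sketch rather than a recommendation; turning fsync off deliberately trades crash safety for speed, which seems acceptable for an operation you can simply redo from the backup):

    # 1. Data-only dump with trigger (and thus FK) firing suppressed on reload;
    #    --disable-triggers is meant for exactly this sort of data-only restore.
    pg_dump --data-only --disable-triggers mydb > mydb.data.sql

    # 2. On the target, relax durability for the duration of the restore,
    #    e.g. fsync = false in postgresql.conf (restart required).  A restore
    #    that dies partway is simply rerun from the backup.

    # 3. Load, then analyze.
    psql -d mydb_new -f mydb.data.sql
    psql -d mydb_new -c "ANALYZE;"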