> I've done bulk data imports of this size with minimal pain by simply
> breaking the raw data into chunks (10M records becomes 10 files of
> 1M records), staged on a separate spindle from the database, and then
> running multiple COPY commands in parallel, but no more than one COPY
> per server core.
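The chunk-and-parallel-COPY approach above can be sketched as below. This is a minimal sketch with toy numbers so it runs as a demo; in practice `big.csv` would be the full dump, the split size would be around 1M rows, and the `chunks` directory would live on a spindle separate from the database's data directory. The table `measurements` and database `mydb` are placeholder names.

```shell
#!/bin/sh
set -e
mkdir -p chunks

# Stand-in for the real 10M-row dump; in practice this file already exists.
seq 1 100 > big.csv

# 100 rows -> 4 files of 25; real case: split -l 1000000
split -l 25 big.csv chunks/part_

# Emit one \copy command per chunk file.
for f in chunks/part_*; do
    printf 'psql -d mydb -c "\\copy measurements FROM %s CSV"\n' "$f"
done > copy_jobs.txt

# To run them, cap the number of concurrent COPYs at the core count,
# as suggested above:
#   xargs -P "$(nproc)" -I CMD sh -c CMD < copy_jobs.txt
```

Generating the job list first and throttling with `xargs -P` keeps the "no more COPYs than cores" rule enforced mechanically rather than by hand.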
> Don't insert data into an indexed table. A very important point with
> bulk loading is that you should load all the data first, then create
> the indexes. Running multiple (different) CREATE INDEX queries in
> parallel can additionally save a lot of time. Also don't move data
> back and forth between
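The load-first-index-later advice can be sketched the same way: once the data is in, each CREATE INDEX runs in its own psql session so different indexes build concurrently. The index, column, and database names below are hypothetical placeholders.

```shell
#!/bin/sh
set -e

# Three independent index builds on the freshly loaded table; each line
# is a separate psql session, so they can run in parallel.
cat > index_jobs.txt <<'EOF'
psql -d mydb -c "CREATE INDEX idx_meas_time ON measurements (logged_at)"
psql -d mydb -c "CREATE INDEX idx_meas_sensor ON measurements (sensor_id)"
psql -d mydb -c "CREATE INDEX idx_meas_value ON measurements (value)"
EOF

# To build all three at once (different indexes on the same table is fine):
#   xargs -P 3 -I CMD sh -c CMD < index_jobs.txt
```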
I'm setting up my first PostgreSQL server to replace an existing MySQL server.
I've been reading Gregory Smith's book PostgreSQL 9.0 High Performance and
also Riggs/Krosing's PostgreSQL 9 Admin Cookbook. While both of these
books are excellent, I am completely new to PostgreSQL and