Re: [GENERAL] hundreds of millions row dBs

2005-01-05 Thread Wes
> Out of curiosity, what value of sort_mem were you using?
>
> (In PG 8.0, the sort memory setting used by CREATE INDEX will be
> maintenance_work_mem not work_mem, which should help in getting larger
> values to be used. But in existing releases you usually need to think
> about a manual tweak.)
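The setting change quoted above can be sketched as follows (a minimal sketch, not from the thread; the table and index names are hypothetical, and the values are illustrative only — both settings are in kilobytes in this era):

```sql
-- PG 8.0 and later: CREATE INDEX sorts are governed by maintenance_work_mem
SET maintenance_work_mem = 524288;  -- 512 MB, for this session only

-- Pre-8.0 releases: CREATE INDEX sorts use sort_mem instead
SET sort_mem = 524288;

-- Hypothetical index build that benefits from the larger sort memory
CREATE INDEX big_table_id_idx ON big_table (id);
```

Raising the value only for the session doing the index build avoids giving every ordinary query that much sort memory.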

Re: [GENERAL] hundreds of millions row dBs

2005-01-04 Thread Pierre-Frédéric Caillaud
To speed up load:
- make fewer checkpoints (tweak checkpoint interval and other parameters in config)
- disable fsync (not sure if it really helps)
- have source data, database tables, and log on three physically different disks
- have the temporary files on a different disk too, or in ramdisk
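The checkpoint and fsync tweaks above correspond to postgresql.conf settings roughly like the following (a sketch using PG 7.4/8.0-era parameter names; the values are illustrative, and `fsync = false` can lose the whole database on a crash, so it is only reasonable for a load you can redo from scratch):

```
# postgresql.conf -- bulk-load settings; revert after loading
checkpoint_segments = 64     # fewer, larger checkpoints (default is 3)
checkpoint_timeout = 1800    # seconds between forced checkpoints
fsync = false                # UNSAFE: acceptable only for reloadable data
```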

Re: [GENERAL] hundreds of millions row dBs

2005-01-04 Thread Tom Lane
"Dann Corbit" <[EMAIL PROTECTED]> writes:
> Here is an instance where a really big ram disk might be handy.
> You could create a database on a big ram disk and load it, then build
> the indexes.
> Then shut down the database and move it to hard disk.

Actually, if you have a RAM disk, just change t
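One way to act on the RAM-disk idea in PG 8.0, which introduced tablespaces, is sketched below. This is an assumption-laden illustration, not what Tom wrote (his reply is cut off above): the mount point, tablespace, and table names are all hypothetical.

```sql
-- Assumes a RAM disk mounted at /mnt/ramdisk, writable by the postgres user
CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk/pgdata';

-- Load into a table placed on the RAM disk...
CREATE TABLE big_table (id bigint, payload text) TABLESPACE ram_ts;

-- ...then, once loaded and indexed, move it to durable storage
ALTER TABLE big_table SET TABLESPACE pg_default;
```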

Re: [GENERAL] hundreds of millions row dBs

2005-01-04 Thread Tom Lane
Wes <[EMAIL PROTECTED]> writes:
> As I recall, the last time we rebuilt our database, it took about 3 hours to
> import 265 million rows of data. It then took another 16 hours to rebuild
> all the indexes.

Out of curiosity, what value of sort_mem were you using?

(In PG 8.0, the sort memory setting used by CREATE INDEX will be maintenance_work_mem not work_mem, which should help in getting larger values to be used. But in existing releases you usually need to think about a manual tweak.)

Re: [GENERAL] hundreds of millions row dBs

2005-01-04 Thread Dann Corbit
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Wes
Sent: Tuesday, January 04, 2005 8:59 AM
To: Guy Rouillier; pgsql-general@postgresql.org; Greer, Doug [NTK]
Subject: Re: [GENERAL] hundreds of millions row dBs

> We're getting about 64 million rows inserted in about 1.5 hrs into a
> table with a multiple-column primary key - that's the only index.

Re: [GENERAL] hundreds of millions row dBs

2005-01-04 Thread Wes
> We're getting about 64 million rows inserted in about 1.5 hrs into a
> table with a multiple-column primary key - that's the only index.
> That seems pretty good to me - SQL Loader takes about 4 hrs to do the
> same job.

As I recall, the last time we rebuilt our database, it took about 3 hours to import 265 million rows of data. It then took another 16 hours to rebuild all the indexes.
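A common way to keep a rebuild like this from paying per-row index maintenance is to load first and index afterwards. Wes does not say this is what they did; the sketch below is generic, and all names and paths are hypothetical:

```sql
-- Drop secondary indexes before the bulk load...
DROP INDEX big_table_date_idx;

-- ...load the data with no indexes to maintain except the primary key...
COPY big_table FROM '/data/rows.txt';

-- ...then rebuild each index with one big sort, which is far cheaper
-- than 265 million incremental index insertions
CREATE INDEX big_table_date_idx ON big_table (created_date);
```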

Re: [GENERAL] hundreds of millions row dBs

2005-01-03 Thread Tom Lane
"Guy Rouillier" <[EMAIL PROTECTED]> writes:
> Greer, Doug wrote:
>> I am interested in using Postgresql for a dB of hundreds of
>> millions of rows in several tables. The COPY command seems to be way
>> too slow. Is there any bulk import program similar to Oracle's SQL
>> loader for Postgresql?
>> Sincerely,
>> Doug Greer

Re: [GENERAL] hundreds of millions row dBs

2005-01-03 Thread Guy Rouillier
Greer, Doug wrote:
> Hello all,
> I am interested in using Postgresql for a dB of hundreds of
> millions of rows in several tables. The COPY command seems to be way
> too slow. Is there any bulk import program similar to Oracle's SQL
> loader for Postgresql?
> Sincerely,
> Doug Greer

We're getting about 64 million rows inserted in about 1.5 hrs into a table with a multiple-column primary key - that's the only index. That seems pretty good to me - SQL Loader takes about 4 hrs to do the same job.
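For reference, COPY itself is PostgreSQL's bulk loader; the rest of the thread is about making it fast rather than replacing it. A minimal invocation, with hypothetical table and file names:

```sql
-- Server-side COPY: reads a file on the database server machine,
-- running with the server's permissions
COPY big_table FROM '/data/big_table.txt';

-- Client-side alternative via psql, reading a file on the client:
-- \copy big_table from 'big_table.txt'
```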