Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-30 Thread Daniel J. Luke
On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote: I should have gprof numbers on a similarly set up test machine soon ... gprof output is available at http://geeklair.net/~dluke/postgres_profiles/ (generated from CVS HEAD as of today). Any ideas are welcome. Thanks! -- Daniel J. Luke

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-30 Thread Daniel J. Luke
On May 29, 2006, at 7:11 AM, Markus Schaber wrote: One big transaction every 5 minutes using 'COPY FROM' (instead of inserts). Are you using "COPY table FROM '/path/to/file'", having the file sitting on the server, or "COPY table FROM STDIN" or psql "\copy", having the file sitting on the client?

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-29 Thread Markus Schaber
Hi, Daniel, Daniel J. Luke wrote: > One big transaction every 5 minutes using 'COPY FROM' (instead of > inserts). Are you using "COPY table FROM '/path/to/file'", having the file sitting on the server, or "COPY table FROM STDIN" or psql "\copy", having the file sitting on the client? From our
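Whichever variant is used (server-side file, STDIN, or psql's \copy), the data stream itself must be in COPY's tab-separated text format, with backslash escapes and \N for NULL. As a hedged illustration of what the thread's "COPY instead of inserts" conversion involves (the row shapes here are invented, not from the thread), a minimal encoder:

```python
# Minimal encoder for PostgreSQL's COPY text format: one tab-separated
# line per row, backslash-escaped, with \N for NULL. A sketch only;
# the example rows are hypothetical, not data from the thread.

def copy_escape(value):
    """Escape one field for COPY text format; None becomes \\N."""
    if value is None:
        return r"\N"
    s = str(value)
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n")
             .replace("\r", "\\r"))

def rows_to_copy_text(rows):
    """Render an iterable of row tuples as a COPY FROM STDIN payload."""
    return "".join("\t".join(copy_escape(f) for f in row) + "\n"
                   for row in rows)

payload = rows_to_copy_text([(1, "a\tb", None)])
print(repr(payload))  # '1\ta\\tb\t\\N\n'
```

A client would feed this payload to `COPY table FROM STDIN` over its driver's copy interface; the point is that one COPY carries the whole batch in a single statement, instead of one INSERT per row.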

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Ian Westmacott
environment. i.e., YMMV. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steinar H. Gunderson Sent: Wednesday, May 24, 2006 4:04 PM To: Daniel J. Luke Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] Getting even more insert performance (250m+rows/day)

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Jim C. Nasby
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote: > On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote: >> Have you tried fiddling with the checkpointing settings? Check your >> logs -- if you get a warning about checkpoints being too close >> together, that should give you quite some boost.

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Daniel J. Luke
On May 24, 2006, at 4:24 PM, Larry Rosenman wrote: Also, is pg_xlog on the same or different spindles from the rest of the PG Data directory? It's sitting on the same disk array (but I'm doing 1 transaction every 5 minutes, and I'm not near the array's sustained write capacity, so I don't

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Larry Rosenman
Daniel J. Luke wrote: > On May 24, 2006, at 4:12 PM, Larry Rosenman wrote: >> are the batches single inserts, or within a big transaction? >> If the former, the latter is a big win. > > One big transaction every 5 minutes using 'COPY FROM' (instead of > inserts). > >> Also, what release(s) are you running?

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Daniel J. Luke
On May 24, 2006, at 4:12 PM, Larry Rosenman wrote: are the batches single inserts, or within a big transaction? If the former, the latter is a big win. One big transaction every 5 minutes using 'COPY FROM' (instead of inserts). Also, what release(s) are you running? 8.1.x (I think we're

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Daniel J. Luke
On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote: On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote: no warnings in the log (I did change the checkpoint settings when I set up the database, but didn't notice an appreciable difference in insert performance). How about wal_buffers?

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Mark Lewis
> The data gets inserted in batches every 5 minutes and I potentially > have people querying it constantly, so I can't remove and re-create > the index. How live does your data need to be? One possibility would be to use a separate table for each batch instead of a separate table per day, cre
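Mark's table-per-batch idea can be sketched concretely: each 5-minute batch loads into its own fresh table, and a UNION ALL view is repointed afterward, so readers querying the view never see a half-loaded batch and the loader never fights a live index. The sketch below uses SQLite purely so it is runnable; in PostgreSQL the same DDL pattern applies, and all table and view names here are invented:

```python
import sqlite3

# Per-batch-table pattern: each batch gets its own table; a view
# exposes the union of all completed batches. SQLite stands in for
# PostgreSQL only to make the sketch runnable; names are invented.

conn = sqlite3.connect(":memory:")
batch_tables = []

def load_batch(batch_id, rows):
    """Load one batch into a fresh table, then rebuild the view."""
    table = f"log_batch_{batch_id}"
    conn.execute(f"CREATE TABLE {table} (ts INTEGER, msg TEXT)")
    conn.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    batch_tables.append(table)
    union = " UNION ALL ".join(f"SELECT * FROM {t}" for t in batch_tables)
    conn.execute("DROP VIEW IF EXISTS log_all")
    conn.execute(f"CREATE VIEW log_all AS {union}")
    conn.commit()

load_batch(1, [(100, "a"), (101, "b")])
load_batch(2, [(200, "c")])
print(conn.execute("SELECT COUNT(*) FROM log_all").fetchone()[0])  # 3
```

The trade-off the thread is weighing: queries against the view stay "live" (each batch appears as soon as its load commits), at the cost of planner overhead from the growing UNION, which is why batches would periodically be rolled up into the daily table.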

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Steinar H. Gunderson
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote: > no warnings in the log (I did change the checkpoint settings when I > set up the database, but didn't notice an appreciable difference in > insert performance). How about wal_buffers? Upping it might not help all that much if on
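The knobs being traded back and forth here all live in postgresql.conf. In the 8.1 era they looked roughly like the fragment below; the values are illustrative placeholders, not settings endorsed anywhere in the thread:

```
# postgresql.conf -- illustrative 8.1-era bulk-load knobs; values are
# examples only, not recommendations from the thread
shared_buffers = 50000        # buffer count (8.1 had no memory units)
wal_buffers = 64              # the wal_buffers suggestion above
checkpoint_segments = 64      # spread checkpoints further apart
checkpoint_warning = 300      # log if checkpoints arrive closer than 300s
```

`checkpoint_warning` is what produces the "checkpoints ... too close together" log message Steinar asks about; its absence from Daniel's logs is why the checkpoint angle looks like a dead end here.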

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Larry Rosenman
Daniel J. Luke wrote: > On May 24, 2006, at 4:02 PM, Dave Dutcher wrote: >> If you can live with possible database corruption, you could try >> turning fsync off. For example if you could just reinsert the data >> on the off chance a hardware failure corrupts the database, you >> might get a decent improvement.

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Daniel J. Luke
On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote: Have you tried fiddling with the checkpointing settings? Check your logs -- if you get a warning about checkpoints being too close together, that should give you quite some boost. no warnings in the log (I did change the checkpoint settings when I set up the database, but didn't notice an appreciable difference in insert performance).

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Daniel J. Luke
On May 24, 2006, at 4:02 PM, Dave Dutcher wrote: If you can live with possible database corruption, you could try turning Fsync off. For example if you could just reinsert the data on the off chance a hardware failure corrupts the database, you might get a decent improvement. I tried, but

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Steinar H. Gunderson
On Wed, May 24, 2006 at 03:45:17PM -0400, Daniel J. Luke wrote: > Things I've already done that have made a big difference: > - modified postgresql.conf shared_buffers value > - converted to COPY from individual insert statements > - changed BLCKSZ to 32768 Have you tried fiddling with the checkpointing settings? Check your logs -- if you get a warning about checkpoints being too close together, that should give you quite some boost.

Re: [PERFORM] Getting even more insert performance (250m+rows/day)

2006-05-24 Thread Dave Dutcher
If you can live with possible database corruption, you could try turning fsync off. For example if you could just reinsert the data on the off chance a hardware failure corrupts the database, you might get a decent improvement. Also have you tried creating the index after you have inserted all your data?
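Dave's second suggestion is the classic bulk-load pattern: drop the index, load, then build the index once over the finished data, rather than paying per-row btree maintenance during the load. A runnable sketch, again using SQLite only so it executes, with invented table and index names:

```python
import sqlite3

# "Create the index after inserting": drop the index, bulk-load, then
# rebuild it in a single pass over the loaded data. SQLite stands in
# for PostgreSQL here; table/index names are invented.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, msg TEXT)")
conn.execute("CREATE INDEX events_ts_idx ON events (ts)")

def bulk_load(rows):
    conn.execute("DROP INDEX events_ts_idx")       # no per-row index upkeep
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    conn.execute("CREATE INDEX events_ts_idx ON events (ts)")  # one pass
    conn.commit()

bulk_load([(i, "x") for i in range(1000)])
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1000
```

This is exactly the option Daniel rules out later in the thread: with queries arriving constantly between the 5-minute batches, there is no window in which the index can safely disappear, which is what motivates Mark's per-batch-table alternative.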