On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:
I should have gprof numbers on a similarly set up test machine soon ...
gprof output is available at http://geeklair.net/~dluke/postgres_profiles/
(generated from CVS HEAD as of today).
Any ideas are welcome.
Thanks!
--
Daniel J. Luke
On May 29, 2006, at 7:11 AM, Markus Schaber wrote:
One big transaction every 5 minutes using 'COPY FROM' (instead of
inserts).
Are you using "COPY table FROM '/path/to/file'", having the file sitting
on the server, or "COPY table FROM STDIN" or psql "\copy", having the
file sitting on the client?
Hi, Daniel,
Daniel J. Luke wrote:
> One big transaction every 5 minutes using 'COPY FROM' (instead of
> inserts).
Are you using "COPY table FROM '/path/to/file'", having the file sitting
on the server, or "COPY table FROM STDIN" or psql "\copy", having the
file sitting on the client?
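For anyone following along, the difference is where the file lives and which side reads it; a rough sketch, with a made-up table name:

    -- server-side COPY: the backend reads the file from the server's filesystem
    COPY traffic_log FROM '/path/to/file';

    -- client-side: data is streamed over the connection from the client
    COPY traffic_log FROM STDIN;

    -- psql's \copy is the client-side equivalent, reading a file on the client
    \copy traffic_log from '/path/to/file'

Server-side COPY avoids shipping the data over the client connection, but the file has to be readable by the server's postgres user.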
From our environment, i.e., YMMV.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Steinar H. Gunderson
Sent: Wednesday, May 24, 2006 4:04 PM
To: Daniel J. Luke
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Getting even more insert performance (250m+rows/day)
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
> On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote:
> >Have you tried fiddling with the checkpointing settings? Check your
> >logs -- if you get a warning about checkpoints being too close together,
> >that should give you quite some boost.
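For reference, the checkpoint knobs being discussed live in postgresql.conf; the values below are purely illustrative (8.1-era parameter names), not figures from this thread:

    # postgresql.conf
    checkpoint_segments = 64     # WAL segments between checkpoints (default 3)
    checkpoint_timeout = 600     # seconds between forced checkpoints (default 300)
    checkpoint_warning = 30      # log a warning if checkpoints come closer than this (seconds)

Raising checkpoint_segments spaces checkpoints further apart during bulk loads, which is what the warning about checkpoints being too close together points at.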
On May 24, 2006, at 4:24 PM, Larry Rosenman wrote:
Also, is pg_xlog on the same or different spindles from the rest of the
PG Data directory?
It's sitting on the same disk array (but I'm doing 1 transaction
every 5 minutes, and I'm not near the array's sustained write
capacity, so I don't
Daniel J. Luke wrote:
> On May 24, 2006, at 4:12 PM, Larry Rosenman wrote:
>> are the batches single inserts, or within a big transaction?
>> If the former, the latter is a big win.
>
> One big transaction every 5 minutes using 'COPY FROM' (instead of
> inserts).
>
>> Also, what release(s) are you running?
On May 24, 2006, at 4:12 PM, Larry Rosenman wrote:
are the batches single inserts, or within a big transaction?
If the former, the latter is a big win.
One big transaction every 5 minutes using 'COPY FROM' (instead of
inserts).
Also, what release(s) are you running?
8.1.x (I think we're
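For readers new to the thread, a rough sketch of the two approaches (table name, columns, and file path are made up; the server-side file form is shown for concreteness, since the thread above asks which COPY form is actually in use):

    -- batched: one COPY, one transaction, every 5 minutes
    BEGIN;
    COPY traffic_log FROM '/path/to/batch_file';
    COMMIT;

    -- row-at-a-time: one INSERT (and, by default, one transaction) per row
    INSERT INTO traffic_log (ts, src_ip, bytes)
        VALUES ('2006-05-24 16:00:00', '10.0.0.1', 512);

A single COPY per batch skips the per-statement parse/plan overhead and the per-row commit, which is why it is such a large win for bulk loads.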
On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote:
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
no warnings in the log (I did change the checkpoint settings when I
set up the database, but didn't notice an appreciable difference in
insert performance).
How about wal_buffers?
> The data gets inserted in batches every 5 minutes and I potentially
> have people querying it constantly, so I can't remove and re-create
> the index.
How live does your data need to be? One possibility would be to use a
separate table for each batch instead of a separate table per day,
cre
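The suggestion is cut off above, but a rough sketch of the per-batch-table idea (all names made up) would be to load each batch into its own unindexed table, index it afterwards, and let readers see the batches through table inheritance:

    -- parent table that queries run against
    CREATE TABLE traffic_log (ts timestamptz, src_ip inet, bytes bigint);

    -- each 5-minute batch gets its own child table
    CREATE TABLE traffic_log_20060524_1600 () INHERITS (traffic_log);
    COPY traffic_log_20060524_1600 FROM '/path/to/batch_file';

    -- index only after the batch is loaded, so the COPY runs index-free
    CREATE INDEX traffic_log_20060524_1600_ts_idx
        ON traffic_log_20060524_1600 (ts);

SELECTs on traffic_log automatically include all child tables, so queries keep hitting one name while each load only ever touches a small, freshly created table.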
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
> no warnings in the log (I did change the checkpoint settings when I
> set up the database, but didn't notice an appreciable difference in
> insert performance).
How about wal_buffers? Upping it might not help all that much if on
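For reference, wal_buffers is also a postgresql.conf setting; in 8.1 it is a count of WAL buffer pages (default 8), and the value here is only an illustration:

    # postgresql.conf
    wal_buffers = 64    # default is 8; only helps if each commit writes a lot of WAL

Changing it requires a server restart.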
Daniel J. Luke wrote:
> On May 24, 2006, at 4:02 PM, Dave Dutcher wrote:
>> If you can live with possible database corruption, you could try
>> turning fsync off. For example, if you could just reinsert the data
>> on the off chance a hardware failure corrupts the database, you
>> might get a decent improvement.
On May 24, 2006, at 4:03 PM, Steinar H. Gunderson wrote:
Have you tried fiddling with the checkpointing settings? Check your
logs -- if you get a warning about checkpoints being too close together,
that should give you quite some boost.
no warnings in the log (I did change the checkpoint settings when I
set up the database, but didn't notice an appreciable difference in
insert performance).
On May 24, 2006, at 4:02 PM, Dave Dutcher wrote:
If you can live with possible database corruption, you could try turning
fsync off. For example, if you could just reinsert the data on the off
chance a hardware failure corrupts the database, you might get a decent
improvement.
I tried, but
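For reference, the setting Dave is talking about, with the same caveat he gives above (a crash with fsync off can corrupt the cluster, so it only makes sense if the whole database can be rebuilt from the raw data):

    # postgresql.conf -- dangerous: only if everything can be re-imported after a crash
    fsync = off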
On Wed, May 24, 2006 at 03:45:17PM -0400, Daniel J. Luke wrote:
> Things I've already done that have made a big difference:
> - modified postgresql.conf shared_buffers value
> - converted to COPY from individual insert statements
> - changed BLCKSZ to 32768
Have you tried fiddling with the checkpointing settings?
> Subject: [PERFORM] Getting even more insert performance
> (250m+rows/day)
>
>
> I have a system that currently inserts ~ 250 million rows per day (I
> have about 10k more raw data than that, but I'm at the limit of my
> ability to get useful insert performance out of postgres).
I have a system that currently inserts ~ 250 million rows per day (I
have about 10k more raw data than that, but I'm at the limit of my
ability to get useful insert performance out of postgres).
Things I've already done that have made a big difference:
- modified postgresql.conf shared_buffers value
- converted to COPY from individual insert statements
- changed BLCKSZ to 32768