"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
>> It does ("LOG: checkpoints are occurring too frequently (2 seconds apart)")
>> However, I tried increasing checkpoint_segments to 32 (512Mb) making it
>> checkpoint every 15 second o
To: Mikael Carneholm
Cc: Simon Riggs; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Migration study, step 1: bulk write performance optimization
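For reference, the settings behind the "checkpoints are occurring too
frequently" message, as a minimal postgresql.conf sketch (8.1-era
parameter names; the values are illustrative, not a recommendation):

    # WAL checkpoint tuning for bulk loads
    checkpoint_segments = 32   # 32 x 16MB segments, i.e. ~512MB of WAL
                               # written between forced checkpoints
    checkpoint_timeout = 300   # seconds; also forces a checkpoint
    checkpoint_warning = 30    # log the "too frequently" warning if
                               # checkpoints come closer than this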
On Wed, 2006-03-22 at 10:04 +0100, Mikael Carneholm wrote:
> but that gave a more uneven insert rate
Not sure what you mean, but happy to review test results.
You should be able to tweak other parameters from here as you had been
trying. Your bgwriter will be of some benefit now if you set it
aggressively.
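A sketch of the 8.1-era bgwriter knobs being referred to (the
"aggressive" values here are hypothetical, not tested recommendations):

    # postgresql.conf -- make the background writer work harder, so
    # fewer dirty buffers remain for each checkpoint to flush at once
    bgwriter_delay = 50            # ms between rounds (default 200)
    bgwriter_lru_percent = 5.0     # portion of the LRU list scanned per round
    bgwriter_lru_maxpages = 100    # cap on dirty pages written per round
    bgwriter_all_percent = 2.0     # also sweep the whole buffer pool
    bgwriter_all_maxpages = 200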
Scott Marlowe <[EMAIL PROTECTED]> writes:
> I've also found that modest increases in commit_siblings and
> commit_delay help a lot on certain types of imports.
On a data import? Those really should have zero effect on a
single-process workload. Or are you doing multiple concurrent imports?
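For reference, the two settings in question (they act only at commit
time when other transactions are open, hence the point about a
single-process load; values illustrative):

    # postgresql.conf -- group commit tuning
    commit_delay = 100      # microseconds to sleep before flushing WAL at commit
    commit_siblings = 5     # ...but only if at least this many other
                            # transactions are currently active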
Ron wrote:
> IIRC, Josh Berkus did some benches that suggests in pg 8.x a value of
> 64 - 256 is best for checkpoint_segments as long as you have the RAM
> available.
I think you are confusing checkpoint_segments with wal_buffers.
checkpoint_segments certainly has little to do with available RAM.
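To make the distinction concrete (illustrative values):

    # wal_buffers is shared memory (RAM) used to buffer WAL before
    # write-out; in 8.1 it is a count of 8kB pages:
    wal_buffers = 64           # 512kB of WAL buffer in shared memory
    # checkpoint_segments governs how much WAL may accumulate on disk
    # between checkpoints; it costs disk space (16MB/segment), not RAM:
    checkpoint_segments = 64   # roughly 1GB of pg_xlog between checkpoints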
On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:
> This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), compared to the
> raw dd result (~75.5Mb/s).
>
> I assume this difference is due to:
> - simultaneous WAL write activity (assumed: for each byte written to the
> table, at least one byte is also written to the WAL) …
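The arithmetic: 10Gb in ~380s is ~27Mb/s of table data, and if every
byte inserted also goes to the WAL, the drives are really absorbing
~54Mb/s, much closer to the ~75.5Mb/s raw figure. The dd baseline was
presumably something like (hypothetical path; 8k blocks to match the
page size):

    dd if=/dev/zero of=/data/ddtest bs=8k count=1310720   # 1310720 x 8kB = 10Gb
    # 10Gb / 380s       = ~27Mb/s seen as table writes
    # table data + WAL  = ~54Mb/s of actual disk writes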
On Tue, Mar 21, 2006 at 06:01:58AM -0600, Jim C. Nasby wrote:
> Are you sure? Metadata changes are probably a lot more common on the WAL
> partition. In any case, I don't see why there should be a difference.
> The real issue is: is related filesystem metadata sync'd as part of a
> file being fsync'ed?
> For the record, that's the wrong way round. For the data partition,
> metadata journaling is enough, and for the WAL partition you don't need any
> FS journaling at all.
Yes, you're right: the data partition shouldn't lose file creation,
deletion, etc., which is not important for the WAL partition.
On Tue, Mar 21, 2006 at 12:52:46PM +0100, Csaba Nagy wrote:
> They knew however that for the data partitions no FS journaling is
> needed, and for the WAL partition meta data journaling is enough, so I
> guess they tuned ext3 for this.
For the record, that's the wrong way round. For the data partition,
metadata journaling is enough, and for the WAL partition you don't need any
FS journaling at all.
> Did you try mounting ext3 with data=writeback by chance? People have
> found that makes a big difference in performance.
I'm not sure, there are other people here doing the OS stuff - I'm pretty
much ignorant about what "data=writeback" could mean :-D
They knew however that for the data partitions no FS journaling is
needed, and for the WAL partition meta data journaling is enough, so I
guess they tuned ext3 for this.
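In case it helps: data=writeback tells ext3 to journal only metadata,
not file contents. A hypothetical /etc/fstab sketch of the layout
Steinar describes (device names and mount points invented):

    # data partition: metadata journaling is enough
    /dev/sdb1   /var/lib/pgsql/data   ext3   noatime,data=writeback   0 0
    # WAL partition: no FS journaling needed at all
    /dev/sdc1   /var/lib/pgsql/wal    ext2   noatime                  0 0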
On Tue, Mar 21, 2006 at 06:18:39AM -0600, Jim C. Nasby wrote:
> Basically, you need to know for certain that if PostgreSQL creates a
> file and then fsync's it that that file is safely on disk, and that the
> filesystem knows how to find it (ie: the metadata is also on disk in
> some fashion).
It …
On Mon, Mar 20, 2006 at 04:19:12PM +0100, Csaba Nagy wrote:
> What I can add from our experience: ext3 turned out lousy for our
> application, and converting to XFS made quite a big improvement for our
> DB load. I don't have hard figures, but I think it was some 30%
> improvement in overall speed …
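For completeness, a minimal sketch of what moving the data directory to
XFS involves (hypothetical device and path):

    mkfs.xfs /dev/sdb1
    mount -t xfs -o noatime /dev/sdb1 /var/lib/pgsql/data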
"Craig A. James" <[EMAIL PROTECTED]> writes:
> If you haven't explored the COPY command yet, check it out. It is stunningly
> fast compared to normal INSERT commands.
Note also that his "benchmark" is testing multiple INSERTs issued within
a loop in a plpgsql function, which has got nearly nothing to do with …
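The contrast Craig describes, as a minimal sketch (hypothetical table
and data file):

    -- row at a time: one statement per row
    INSERT INTO items VALUES (1, 'first');
    INSERT INTO items VALUES (2, 'second');

    -- bulk path: one statement streams the whole file
    COPY items FROM '/tmp/items.dat';   -- tab-delimited text by default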
Others are reporting better performance on 8.1.x with very large
shared buffers. You may want to try tweaking that possibly as high as
20% of available memory.
Dave
On 20-Mar-06, at 9:59 AM, Mikael Carneholm wrote:
> Ok, here's the deal:
> I am responsible for an exciting project of evaluating migration of a
> medium/large application for a well-known Swedish car&truck manufacturer …
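On the shared_buffers suggestion above: in 8.1 the setting is a count
of 8kB buffers rather than a memory size, so 20% of, say, an 8GB
machine works out roughly as follows (illustrative):

    # 8GB x 20% = ~1.6GB; 1.6GB / 8kB = ~200000 buffers
    shared_buffers = 200000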
> using a 16kb block size (for read performance) will probably be
> considered as well.
Hm, this means that when postgres wants to write just one 8k page, the OS
will have to read 16k, replace half of it with the new block, and write
16k again... I guess it should be better to stick with the default 8k
blocks.
Mikael,
I've just recently been through such an experience, i.e. migrating a DB
in about the same size category as yours from another vendor to postgres.
I think you got it right with the fsync turned off during migration
(just don't forget to turn it back on after finishing ;-), and using
tables without indexes during the load …
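The fsync toggle mentioned above, as it appears in postgresql.conf (a
sketch; only sensible while the data can be reloaded from the source):

    # during the bulk migration only (a crash means redoing the load):
    fsync = off
    # ...and the moment the migration is finished:
    fsync = on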
Mikael Carneholm wrote:
> I am responsible for an exciting project of evaluating migration of a
> medium/large application for a well-known Swedish car&truck manufacturer
> ... The goal right now is to find the set of parameters that gives as
> short a bulk insert time as possible, minimizing downtime while migrating …