Hi,
On 27 Mar 09, at 21:42, Sam Mason wrote:
OK, that's turned out to be a good point. I've now written five
different versions and they don't seem to give the results I'm
expecting
at all!
If you're that keen on having a good concurrent load-simulator
client for PostgreSQL, my t
On Wed, Mar 25, 2009 at 01:48:03PM -0500, Kenneth Marshall wrote:
> On Wed, Mar 25, 2009 at 05:56:02PM +0000, Sam Mason wrote:
> > On Wed, Mar 25, 2009 at 12:01:57PM -0500, Kenneth Marshall wrote:
> > > Are you sure that you are able to actually drive the load at the
> > > high end of the test regi
On Wed, Mar 25, 2009 at 05:56:02PM +0000, Sam Mason wrote:
> On Wed, Mar 25, 2009 at 12:01:57PM -0500, Kenneth Marshall wrote:
> > On Wed, Mar 25, 2009 at 03:58:06PM +0000, Sam Mason wrote:
> > > #!/bin/bash
> > > nclients=$1
> > > ittrs=$2
> > > function gensql {
> > > echo "INSERT
On Wed, Mar 25, 2009 at 12:01:57PM -0500, Kenneth Marshall wrote:
> On Wed, Mar 25, 2009 at 03:58:06PM +0000, Sam Mason wrote:
> > #!/bin/bash
> > nclients=$1
> > ittrs=$2
> > function gensql {
> > echo "INSERT INTO bm (c,v) VALUES ('$1','0');"
> > for (( i = 1; i < $ittrs; i++
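The benchmark script is cut off by the archive at the `for` loop. A minimal sketch of what a complete version might look like, assuming the loop body simply repeats the same INSERT pattern with the iteration counter as the value (the defaults, the loop body, and the final `gensql 0` demo call are assumptions, not the original code):

```shell
#!/bin/bash
# Hypothetical reconstruction of the truncated benchmark script above;
# the loop body and argument defaults are assumptions, not the original.
nclients=${1:-4}   # number of concurrent clients
ittrs=${2:-3}      # inserts generated per client

function gensql {
    # Statement stream for client $1: one committed INSERT per iteration
    echo "INSERT INTO bm (c,v) VALUES ('$1','0');"
    for (( i = 1; i < $ittrs; i++ )); do
        echo "INSERT INTO bm (c,v) VALUES ('$1','$i');"
    done
}

# Show the stream a single client would send
gensql 0
```

Presumably the original then launched one backend per client, something like `gensql $c | psql test &` for each `c`, followed by `wait` — so that each INSERT autocommits and exercises the WAL flush path under concurrency.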
On Wed, Mar 25, 2009 at 03:58:06PM +0000, Sam Mason wrote:
> On Wed, Mar 25, 2009 at 02:38:45PM +, Greg Stark wrote:
> > Sam Mason writes:
> > > Why does it top out so much though? It goes up nicely to around ten
> > > clients (I tested with 8 and 12) and then tops out and levels off. The
>
On Wed, Mar 25, 2009 at 02:38:45PM +0000, Greg Stark wrote:
> Sam Mason writes:
> > Why does it top out so much though? It goes up nicely to around ten
> > clients (I tested with 8 and 12) and then tops out and levels off. The
> > log is chugging along at around 2MB/s which is well above where t
Greg Stark writes:
> What happens is that the first backend comes along, finds nobody else waiting
> and does an fsync for its own work. While that fsync is happening the rest of
> the crowd -- N-1 backends -- comes along and blocks waiting on the lock. The
> first backend to get the lock fsyncs t
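Greg's description reduces to simple arithmetic: while one fsync is in flight, up to N-1 backends queue behind the lock, and the next lock-holder's fsync commits all of them at once. A toy sketch of the resulting ceiling (the 120 flushes/s figure is an assumed 7200 RPM disk with one flush per rotation, purely for illustration):

```shell
# Toy model of group commit: each fsync retires every backend that
# queued behind it, so the commit ceiling grows with client count
# until something other than the fsync rate becomes the bottleneck.
fsyncs_per_sec=120    # assumption: 7200 RPM disk, one flush per rotation
for n in 1 2 4 8 16; do
    echo "clients=$n -> up to $((n * fsyncs_per_sec)) commits/s"
done
```

This is of course an upper bound; it ignores lock handoff latency and assumes every queued backend's WAL is covered by the single flush.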
Sam Mason writes:
>> You can see this
>> most easily by doing inserts into a system that's limited by a slow fsync,
>> like a single disk without write cache where you're bound by RPM speed.
>
> Yes, I did a test like this and wasn't getting the scaling I was
> expecting--hence my post. I though
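The slow-fsync bound quoted above is easy to quantify: with no write cache, a commit can complete at most once per platter rotation, so rotational speed directly caps a lone client's commit rate (the 7200 RPM figure is an assumption for illustration):

```shell
# Single-client commit ceiling on a disk with no write cache:
# at most one fsync per rotation, i.e. RPM/60 commits per second.
rpm=7200
echo "max commits/s for one client: $((rpm / 60))"   # 120
```

Any single-client result well below this figure suggests extra rotations per commit; results above it suggest a write cache is absorbing the fsyncs.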
[ I'm arbitrarily replying to Greg as his was the most verbose ]
On Tue, Mar 24, 2009 at 11:23:36PM -0400, Greg Smith wrote:
> On Tue, 24 Mar 2009, Sam Mason wrote:
> >The conceptual idea is to have at most one outstanding flush for the
> >log going through the filesystem at any one time.
>
> Qu
On Tue, 24 Mar 2009, Sam Mason wrote:
The conceptual idea is to have at most one outstanding flush for the
log going through the filesystem at any one time.
Quoting from src/backend/access/transam/xlog.c, inside XLogFlush:
"Since fsync is usually a horribly expensive operation, we try to
pig
Sorry for top-posting -- blame apple.
Isn't this just a good description of exactly how it works today?
--
Greg
On 24 Mar 2009, at 20:51, Tom Lane wrote:
Sam Mason writes:
The conceptual idea is to have at most one outstanding flush for the
log going through the filesystem at any one time
> "Sam" == Sam Mason writes:
Sam> Hi,
Sam> I had an idea while going home last night and still can't think
Sam> why it's not implemented already as it seems obvious.
[snip idea about WAL fsyncs]
Unless I'm badly misunderstanding you, I think it already has (long
ago).
Only the holder of
Sam Mason writes:
> The conceptual idea is to have at most one outstanding flush for the
> log going through the filesystem at any one time.
I think this is a variant of the "group commit" or "commit delay"
stuff that's already in there (and doesn't work real well :-().
The problem is to sync mul
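The "commit delay" machinery Tom refers to is exposed as configuration parameters; a hedged illustration of the knobs in postgresql.conf (the values shown are arbitrary examples, not recommendations):

```
# postgresql.conf -- the group-commit / commit-delay knobs
commit_delay = 10       # microseconds to sleep before flushing WAL; 0 disables
commit_siblings = 5     # only delay when at least this many other
                        # transactions are concurrently active
```

The idea is that sleeping briefly before the flush lets other nearly-committed transactions pile in behind the same fsync, at the cost of added latency when no siblings show up.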