Kevin Grittner wrote:
In my experience, the response-time benefit of sizing your
connection pool down to match available resources tends to be
even more noticeable than the throughput improvement. This
directly contradicts many people's intuition, revealing the
downside of "gut feel."
Greg Smith wrote:
> Kevin Grittner wrote:
>> Of course, the only way to really know some of these numbers is
>> to test your actual application on the real hardware under
>> realistic load; but sometimes you can get a reasonable
>> approximation from early tests or "gut feel" based on experience
>
Kevin Grittner wrote:
Of course, the only way to really know some of these numbers is to
test your actual application on the real hardware under realistic
load; but sometimes you can get a reasonable approximation from
early tests or "gut feel" based on experience with similar
applications.
And
On Thu, Sep 09, 2010 at 10:38:16AM -0400, Alvaro Herrera wrote:
Excerpts from David Kerr's message of mié sep 08 18:29:59 -0400 2010:
> Thanks for the insight. We're currently in performance testing of the
> app. Currently, the JVM is the bottleneck; once we get past that
> I'm sure it will be the database, at which point I'll have the kind
> of data you're talking about.
On Wed, Sep 08, 2010 at 05:27:24PM -0500, Kevin Grittner wrote:
David Kerr wrote:
> My assertion/hope is that the saturation point
> on this machine should be higher than most.
Here's another way to think about it -- how long do you expect your
average database request to run? (Our top 20 transaction functions
average about 3ms per execution.) What does
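Kevin's question can be turned into a rough capacity estimate via Little's law: once you know how many connections can usefully be active and how long the average request runs, the maximum useful throughput follows. A minimal sketch, using the 96-connection and 3 ms figures from this thread (the helper name is mine):

```python
def max_throughput(active_connections: int, avg_request_sec: float) -> float:
    """Little's law rearranged: throughput = concurrency / latency."""
    return active_connections / avg_request_sec

# Figures from the thread: ~96 useful active connections on a 48-core box,
# ~3 ms average per transaction.
tps = max_throughput(96, 0.003)
print(f"{tps:.0f} requests/sec")
```

Past that point, adding connections only adds queueing; it cannot add throughput.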
On Wed, Sep 08, 2010 at 04:51:17PM -0500, Kevin Grittner wrote:
David Kerr wrote:
> Hmm, I'm not following you. I've got 48 cores. That means my
> sweet-spot active connections would be 96.
Plus your effective spindle count. That can be hard to calculate,
but you could start by just counting spindles on your drive array.
> Now if I were to connection pool
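The sizing rule being discussed here is often summarized as ((2 * core_count) + effective_spindle_count). A quick sketch of that heuristic (the function name is mine; the 48-core figure is from the thread, the 24-spindle array is a hypothetical example):

```python
def pool_size(cores: int, effective_spindles: int) -> int:
    """Connection-pool sizing heuristic from this thread: enough active
    connections to keep every core and every spindle busy."""
    return (2 * cores) + effective_spindles

# 48 cores and, say, a 24-spindle array (spindle count is hypothetical):
print(pool_size(48, 24))  # 120
```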
On Wed, Sep 08, 2010 at 03:56:24PM -0500, Kevin Grittner wrote:
David Kerr wrote:
> Actually, this is real... that's 2000 connections - connection
> pooled out to 20k or so. (although I'm pushing for closer to 1000
> connections).
>
> I know that's not the ideal way to go, but it's what I've got to
> work with.
>
> It IS a huge box though...
FWIW, my benc
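The 20k-clients-over-2000-connections arrangement David describes is, at heart, a queue in front of a fixed set of database connections: only N requests run at once and everyone else waits their turn. A toy illustration of that idea using a semaphore (pure Python, no real database; all names here are mine):

```python
import threading

class ToyPool:
    """Caps concurrent 'database' work at pool_size; extra clients block."""
    def __init__(self, pool_size: int):
        self._slots = threading.Semaphore(pool_size)

    def run(self, work):
        with self._slots:   # blocks while all connections are busy
            return work()

pool = ToyPool(pool_size=4)
results = []
# 16 "clients" share 4 "connections"; each computes i*i through the pool.
threads = [threading.Thread(target=lambda i=i: results.append(pool.run(lambda: i * i)))
           for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```

A real pooler (pgbouncer, a JDBC pool, etc.) adds connection reuse and timeouts, but the admission-control core is the same.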
On Wed, Sep 08, 2010 at 04:35:28PM -0400, Tom Lane wrote:
David Kerr writes:
> should I be running pgbench differently? I tried increasing the # of threads
> but that didn't increase the number of backends, and I'm trying to simulate
> 2000 physical backend processes.
The odds are good that if you did get up that high, what you'd find is
pgbench itself
On Wed, Sep 08, 2010 at 03:44:36PM -0400, Tom Lane wrote:
Greg Smith writes:
> Tom Lane wrote:
>> So I think you could get above the FD_SETSIZE limit with a bit of
>> hacking if you were using 9.0's pgbench. No chance with 8.3 though.
> I believe David can do this easily enough by compiling a 9.0 source code
> tree with the "--disable-thread-safety" option
On Wed, Sep 08, 2010 at 03:27:34PM -0400, Greg Smith wrote:
Tom Lane wrote:
As of the 9.0 release, it's possible to run pgbench in a "multi thread"
mode, and if you forced the subprocess rather than thread model it looks
like the select() limit would be per subprocess rather than global.
So I think you could get above the FD_SETSIZE limit with a bit of
hacking if you were using 9.0's pgbench. No chance with 8.3 though.
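The point about the select() limit being per subprocess can be made concrete with a little arithmetic: with FD_SETSIZE commonly 1024, one pgbench process cannot watch 2000 client sockets, but splitting the clients across subprocesses keeps each process under the cap. A rough sketch (1024 is the common default; the exact per-process fd overhead varies, so these are assumptions):

```python
FD_SETSIZE = 1024  # common default cap on fds usable with select()

def fds_per_process(total_clients: int, subprocesses: int) -> int:
    """Client sockets each pgbench subprocess must watch (ceiling division)."""
    return -(-total_clients // subprocesses)

# 2000 clients in one process blows past the limit...
print(fds_per_process(2000, 1) > FD_SETSIZE)   # True
# ...but 4 subprocesses stay comfortably under it.
print(fds_per_process(2000, 4))                # 500
```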
David Kerr writes:
> I'm running pgbench with a fairly large # of clients and getting this error
> in my PG log file.
> LOG: could not send data to client: Broken pipe
That error suggests that pgbench dropped the connection. You might be
running into some bug or internal limitation in pgbench.
Howdy,
I'm running pgbench with a fairly large # of clients and getting this error in
my PG log file.
Here's the command:
./pgbench -c 1100 testdb -l
I get:
LOG: could not send data to client: Broken pipe
(I had to modify the pgbench.c file to make it go that high, I changed:
MAXCLIENTS = 204