Re: [PERFORM] Question on pgbench output

2009-04-05 Thread Tom Lane
David Kerr writes: > Fortunately the network throughput issue is not mine to solve. > Would it be fair to say that with the pgbench output i've given so far > that if all my users clicked "go" at the same time (i.e., worst case > scenario), i could expect (from the database) about 8 second respo…
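
A rough back-of-envelope for where a figure of that scale can come from (the ~1.5 MB per user payload comes from later messages in this thread; the link speed is assumed here only for illustration, and this is not necessarily the arithmetic used in the reply):

400 users x 1.5 MB = roughly 600 MB of result data
600 MB at ~110 MB/s of usable gigabit bandwidth = roughly 5.5 s of raw transfer time

so a worst-case "everyone clicks at once" response in the several-second range is dominated by moving the data, before any database work is counted.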

Re: [PERFORM] Question on pgbench output

2009-04-05 Thread David Kerr
Tom Lane wrote: Simon Riggs writes: On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote: 400 concurrent users doesn't mean that they're pulling 1.5 megs / second every second. There's a world of difference between 400 connected and 400 concurrent users. You've been testing 400 concurrent us…

Re: [PERFORM] Question on pgbench output

2009-04-05 Thread Tom Lane
Simon Riggs writes: > On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote: >> 400 concurrent users doesn't mean that they're pulling 1.5 megs / >> second every second. > There's a world of difference between 400 connected and 400 concurrent > users. You've been testing 400 concurrent users, yet w…

Re: [PERFORM] Question on pgbench output

2009-04-05 Thread Simon Riggs
On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote: > 400 concurrent users doesn't mean that they're pulling 1.5 megs / > second every second. Just that they could potentially pull 1.5 megs at > any one second. most likely there is a 6 (minimum) to 45 second > (average) gap between each individu…
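
To put the connected-versus-concurrent distinction in numbers (a rough sketch using the 45 second average gap quoted above and an assumed ~1 second of database time per request):

expected simultaneous queries = 400 x 1 s / (1 s + 45 s) = roughly 9

so 400 connected users with realistic think time look to the database more like single-digit concurrent sessions, not 400 simultaneous queries.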

Re: [PERFORM] Question on pgbench output

2009-04-04 Thread David Kerr
On Fri, Apr 03, 2009 at 10:35:58PM -0400, Greg Smith wrote: - On Fri, 3 Apr 2009, Tom Lane wrote: - - and a bunch of postmaster ones, with "-c" (or by hitting "c" while top is - running) you can even see what they're all doing. If the pgbench process - is consuming close to 100% of a CPU's time…
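
A minimal way to do the check Greg describes while the benchmark runs (assuming a Linux-style top; process names and users will vary):

# in one terminal: run the benchmark as in the thread
pgbench -c 400 -t 50 -f trans.sql -l

# in another: show full command lines with -c and watch the CPU column;
# a pgbench process pinned near 100% of one CPU suggests the client,
# not the server, is the bottleneck
top -c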

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Greg Smith
On Fri, 3 Apr 2009, Tom Lane wrote: However, I don't think anyone else has been pgbench'ing transactions where client-side libpq has to absorb (and then discard) a megabyte of data per xact. I wouldn't be surprised that that eats enough CPU to make it an issue. David, did you pay any attention…
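
One way to separate "the server is slow" from "the pgbench client is choking on a megabyte of result data per transaction" is to run the same lookup but return only the size of the row, so the server does roughly the same read while libpq receives a few bytes. This is a hypothetical variant, not a script suggested verbatim in the thread; the \setrandom range is kept as quoted elsewhere in it:

cat > trans_len.sql <<'EOF'
\setrandom iid 1 5
SELECT length(content) FROM test WHERE item_id = :iid;
EOF

pgbench -c 400 -t 50 -f trans_len.sql -l

If tps jumps dramatically with this script, a large share of the original numbers was client-side CPU and data transfer rather than database work.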

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
Gah - sorry, setting up pgbouncer for my Plan B. I meant -pgbench- Dave Kerr On Fri, Apr 03, 2009 at 04:34:58PM -0700, David Kerr wrote: - On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote: - - Greg Smith writes: - - > pgbench is extremely bad at simulating large numbers of clients. Th…

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote: - Greg Smith writes: - > pgbench is extremely bad at simulating large numbers of clients. The - > pgbench client operates as a single thread that handles both parsing the - > input files, sending things to clients, and processing their r…

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Tom Lane
Greg Smith writes: > pgbench is extremely bad at simulating large numbers of clients. The > pgbench client operates as a single thread that handles both parsing the > input files, sending things to clients, and processing their responses. > It's very easy to end up in a situation where that bo…
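
Because the pgbench client of this era is a single process, one common workaround is to run it on a different machine from the database server, so the benchmark driver isn't stealing CPU from the backends. Host, user, and database names below are placeholders, not values from the thread:

pgbench -h db-server.example.com -U dbuser -c 400 -t 50 -f trans.sql -l perftest

Even then a single pgbench process juggling 400 connections can saturate one CPU on the client host, so it is worth watching that machine with top as well.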

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Tom Lane
David Kerr writes: > On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote: > - How much more "real" is the target hardware than what you have? > - You appear to need about a factor of 10 better disk throughput than > - you have, and that's not going to be too cheap. > The hardware i'm using i…

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Greg Smith
On Fri, 3 Apr 2009, David Kerr wrote: Here is my transaction file: \setrandom iid 1 5 BEGIN; SELECT content FROM test WHERE item_id = :iid; END; Wrapping a SELECT in a BEGIN/END block is unnecessary, and it will significantly slow things down for two reasons: the transactions overhead an…
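
A minimal version of the transaction file with the wrapper removed, along the lines Greg suggests (the \setrandom range is kept as quoted above; it should match the number of rows in the test table):

cat > trans.sql <<'EOF'
\setrandom iid 1 5
SELECT content FROM test WHERE item_id = :iid;
EOF

Each pgbench "transaction" is then a single SELECT running in its own implicit transaction, with no extra BEGIN/END round trips per iteration.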

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Scott Marlowe
On Fri, Apr 3, 2009 at 1:53 PM, David Kerr wrote: > Here is my transaction file: > \setrandom iid 1 5 > BEGIN; > SELECT content FROM test WHERE item_id = :iid; > END; > > and then i executed: > pgbench -c 400 -t 50 -f trans.sql -l > > The results actually have surprised me, the database isn't…
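
For reference, -c 400 -t 50 means 400 x 50 = 20,000 transactions in total, and -l makes pgbench write one line per transaction to per-process log files (pgbench_log.<pid> in the pgbench of this era). Assuming the usual log format in which the third column is the per-transaction latency in microseconds, a quick average can be pulled out with something like:

awk '{ sum += $3; n++ } END { printf "%.1f ms average latency\n", sum / n / 1000 }' pgbench_log.*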

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote: - > I'm not really sure how to evaluate the tps, I've read in this forum that - > some folks are getting 2k tps so this wouldn't appear to be good to me. - - Well, you're running a custom transaction definition so comparing your - number to…

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread Tom Lane
David Kerr writes: > The results actually have surprised me, the database isn't really tuned > and i'm not working on great hardware. But still I'm getting: > scaling factor: 1 > number of clients: 400 > number of transactions per client: 50 > number of transactions actually processed: 2/2…

[PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
Hello! Sorry for the wall of text here. I'm working on a performance POC and I'm using pgbench and could use some advice. Mostly I want to ensure that my test is valid and that I'm using pgbench properly. The story behind the POC is that my developers want to pull web items from the database (no…
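
The schema implied by the transaction file quoted elsewhere in the thread is a single table keyed by item_id with a large content column. A hypothetical setup along those lines (table and column names are taken from the quoted script; the database name, row count, and row size are placeholders chosen only for illustration):

createdb perftest
psql perftest <<'EOF'
CREATE TABLE test (item_id int PRIMARY KEY, content text);
-- large rows, roughly the "web item" payloads described in the thread
INSERT INTO test
SELECT i, repeat('x', 1000000)
FROM generate_series(1, 5) AS i;
EOF

With the table populated, the custom script and pgbench invocation quoted above can be run against the perftest database.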