On 5/23/13 7:39 AM, Sachin D. Bhosale-Kotwal wrote:
So I do not understand why the spike occurs at 12:09:14 only.
This could easily be caused by something outside of the test itself.
Background processes. A monitoring system kicking in to write some data
to disk will cause a drop like this.
T
Hello,
I am testing the performance of a PostgreSQL application using pgbench.
I am getting a spike in the results (graphs), as shown in the attached
graph, due to a throughput drop at that time.
pgbench itself performs a checkpoint on the server (where the queries are
running) before and after the test starts.
pgbench is running on
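One way to rule checkpoint activity in or out is to force a checkpoint explicitly before each timed run and watch the server log while the test executes. A rough sketch only; the database name and durations here are assumptions:

    psql -d testdb -c "CHECKPOINT;"   # force a checkpoint so one does not land mid-run
    pgbench -c 10 -T 60 testdb        # timed run; watch the log for checkpoint activity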
On 2/26/13 4:45 PM, Costin Oproiu wrote:
First, I've got no good explanation for this and it would be nice to
have one. As far as I can understand this issue, the heaviest update
traffic should be on the branches table and should affect all tests.
From http://www.postgresql.org/docs/current/sta
On Thu, Feb 28, 2013 at 11:20 AM, Pavan Deolasee
wrote:
> On Wed, Feb 27, 2013 at 3:15 AM, Costin Oproiu
> wrote:
>> I took some time to figure out a reasonable tuning for my fresh 9.2.3
>> installation when I noticed the following:
>>
>> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.
On Wed, Feb 27, 2013 at 3:15 AM, Costin Oproiu wrote:
> I took some time to figure out a reasonable tuning for my fresh 9.2.3
> installation when I noticed the following:
>
> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U
> postgres -i -s 1
> ...
> 10 tuples done.
> ..
I took some time to figure out a reasonable tuning for my fresh 9.2.3
installation when I noticed the following:
[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U
postgres -i -s 1
...
10 tuples done.
...
vacuum...done.
[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1
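For context, the -s scale factor controls how many rows are created: scale 1 populates pgbench_accounts with 100,000 rows, and the row counts scale linearly. For example (database name is an assumption):

    pgbench -i -s 100 testdb   # scale 100, i.e. ~10,000,000 pgbench_accounts rows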
On Thu, Jan 27, 2011 at 2:26 PM, DM wrote:
> Is there anything that I can do to further improve 9.0.2 performance? The
> performance gain (tps) that I got is only 10%. Is that expected, or should
> I be able to get more?
Well, the settings you specified don't sound like the values that we
normally recommend.
ht
PG 9.0.2 is performing better than PG 8.4.1.
There are more transactions per second in PG 9.0.2 than in PG 8.4.1, which is
a good thing.
Also, below are the kernel parameters that I used.
-- Shared Memory Limits
max number of segments = 4096
max seg size (kbytes) = 15099492
max total shar
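For reference, on Linux these limits are usually set through sysctl; a sketch only, with shmmax converted from the kB figure above (these are not recommendations):

    # /etc/sysctl.conf
    kernel.shmmni = 4096          # max number of segments
    kernel.shmmax = 15461879808   # max segment size in bytes (15099492 kB)
    # apply with: sysctl -p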
On 10-01-2011 05:25, Greg Smith wrote:
Euler Taveira de Oliveira wrote:
On 07-01-2011 22:59, Greg Smith wrote:
setrandom: invalid maximum number -2147467296
It is failing at atoi() circa pgbench.c:1036. But that is just the first
one. There are some variables and constants that need to be co
Euler Taveira de Oliveira wrote:
On 07-01-2011 22:59, Greg Smith wrote:
setrandom: invalid maximum number -2147467296
It is failing at atoi() circa pgbench.c:1036. But that is just the first
one. There are some variables and constants that need to be converted
to int64 and some functions that m
On 07-01-2011 22:59, Greg Smith wrote:
setrandom: invalid maximum number -2147467296
It is failing at atoi() circa pgbench.c:1036. But that is just the first one. There
are some variables and constants that need to be converted to int64 and some
functions that must speak 64-bit such as getrand()
At one point I was working on a patch to pgbench to have it adopt 64-bit
math internally even when running on 32 bit platforms, which are
currently limited to a database scale of ~4000 before the whole process
crashes and burns. But since the range was still plenty high on a
64-bit system, I
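A minimal sketch of the kind of conversion being discussed (illustrative only, not the actual patch): parse bounds with strtoll() into an int64 instead of atoi() into an int, so values past 2^31 survive.

    /* illustrative sketch only -- not the actual pgbench patch */
    #include <stdint.h>
    #include <stdlib.h>

    static int64_t
    parse_bound(const char *arg)
    {
        /* atoi() silently overflows past 2147483647; strtoll() does not */
        return (int64_t) strtoll(arg, NULL, 10);
    }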
Kevin Grittner wrote:
In my experience you can expect the response time benefit of
reducing the size of your connection pool to match available
resources to be more noticeable than the throughput improvements.
This directly contradicts many people's intuition, revealing the
downside of "gut fee
Greg Smith wrote:
> Kevin Grittner wrote:
>> Of course, the only way to really know some of these numbers is
>> to test your actual application on the real hardware under
>> realistic load; but sometimes you can get a reasonable
>> approximation from early tests or "gut feel" based on experience
>
Kevin Grittner wrote:
Of course, the only way to really know some of these numbers is to
test your actual application on the real hardware under realistic
load; but sometimes you can get a reasonable approximation from
early tests or "gut feel" based on experience with similar
applications.
And
On Thu, Sep 09, 2010 at 10:38:16AM -0400, Alvaro Herrera wrote:
- Excerpts from David Kerr's message of Wed Sep 08 18:29:59 -0400 2010:
-
- > Thanks for the insight. We're currently in performance testing of the
- > app. Currently the JVM is the bottleneck; once we get past that
- > I'm sure it
Excerpts from David Kerr's message of Wed Sep 08 18:29:59 -0400 2010:
> Thanks for the insight. We're currently in performance testing of the
> app. Currently the JVM is the bottleneck; once we get past that
> I'm sure it will be the database, at which point I'll have the kind
> of data you're tal
On Wed, Sep 08, 2010 at 05:27:24PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > My assertion/hope is that the saturation point
- > on this machine should be higher than most.
-
- Here's another way to think about it -- how long do you expect your
- average database request to run?
David Kerr wrote:
> My assertion/hope is that the saturation point
> on this machine should be higher than most.
Here's another way to think about it -- how long do you expect your
average database request to run? (Our top 20 transaction functions
average about 3ms per execution.) What does
On Wed, Sep 08, 2010 at 04:51:17PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Hmm, I'm not following you. I've got 48 cores; that means my
- > sweet-spot active connections would be 96.
-
- Plus your effective spindle count. That can be hard to calculate,
- but you could start by
David Kerr wrote:
> Hmm, I'm not following you. I've got 48 cores; that means my
> sweet-spot active connections would be 96.
Plus your effective spindle count. That can be hard to calculate,
but you could start by just counting spindles on your drive array.
> Now if I were to connection po
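To make that arithmetic concrete (the spindle count here is purely an assumption): with 48 cores and, say, 24 effective spindles, the suggested active connection count would be (2 * 48) + 24 = 120.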
On Wed, Sep 08, 2010 at 03:56:24PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Actually, this is real... that's 2000 connections, connection
- > pooled out to 20k or so (although I'm pushing for closer to 1000
- > connections).
- >
- > I know that's not the ideal way to go, but it's
David Kerr wrote:
> Actually, this is real... that's 2000 connections, connection
> pooled out to 20k or so (although I'm pushing for closer to 1000
> connections).
>
> I know that's not the ideal way to go, but it's what I've got to
> work with.
>
> It IS a huge box though...
FWIW, my benc
On Wed, Sep 08, 2010 at 04:35:28PM -0400, Tom Lane wrote:
- David Kerr writes:
- > Should I be running pgbench differently? I tried increasing the # of threads,
- > but that didn't increase the number of backends, and I'm trying to simulate
- > 2000 physical backend processes.
-
- The odds are goo
David Kerr writes:
> Should I be running pgbench differently? I tried increasing the # of threads,
> but that didn't increase the number of backends, and I'm trying to simulate
> 2000 physical backend processes.
The odds are good that if you did get up that high, what you'd find is
pgbench itself
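In other words, the backend count tracks -c (clients), while -j only sets the number of worker threads inside pgbench itself; a sketch with a 9.0 pgbench (database name assumed):

    pgbench -c 2000 -j 16 -T 300 testdb   # 2000 connections driven by 16 pgbench threads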
On Wed, Sep 08, 2010 at 03:44:36PM -0400, Tom Lane wrote:
- Greg Smith writes:
- > Tom Lane wrote:
- >> So I think you could get above the FD_SETSIZE limit with a bit of
- >> hacking if you were using 9.0's pgbench. No chance with 8.3 though.
-
- > I believe David can do this easily enough by co
On Wed, Sep 08, 2010 at 03:27:34PM -0400, Greg Smith wrote:
- Tom Lane wrote:
- >As of the 9.0 release, it's possible to run pgbench in a "multi thread"
- >mode, and if you forced the subprocess rather than thread model it looks
- >like the select() limit would be per subprocess rather than global.
Greg Smith writes:
> Tom Lane wrote:
>> So I think you could get above the FD_SETSIZE limit with a bit of
>> hacking if you were using 9.0's pgbench. No chance with 8.3 though.
> I believe David can do this easily enough by compiling a 9.0 source code
> tree with the "--disable-thread-safety" o
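A sketch of such a build (standard source-tree steps; the install prefix is an assumption):

    ./configure --disable-thread-safety --prefix=/usr/local/pgsql90
    make
    cd contrib/pgbench && make && make install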
Tom Lane wrote:
As of the 9.0 release, it's possible to run pgbench in a "multi thread"
mode, and if you forced the subprocess rather than thread model it looks
like the select() limit would be per subprocess rather than global.
So I think you could get above the FD_SETSIZE limit with a bit of
ha
David Kerr writes:
> I'm running pgbench with a fairly large # of clients and getting this error
> in my PG log file.
> LOG: could not send data to client: Broken pipe
That error suggests that pgbench dropped the connection. You might be
running into some bug or internal limitation in pgbench.
Howdy,
I'm running pgbench with a fairly large # of clients and getting this error in
my PG log file.
Here's the command:
./pgbench -c 1100 testdb -l
I get:
LOG: could not send data to client: Broken pipe
(I had to modify the pgbench.c file to make it go that high; I changed:
MAXCLIENTS = 204
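Note that each pgbench client also consumes a file descriptor in the pgbench process, so the shell's open-file limit may need raising as well; a sketch (the limit value is an assumption):

    ulimit -n 4096                # raise the per-process open-file limit first
    ./pgbench -c 1100 testdb -l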
Craig James wrote:
synchronous_commit = off
full_page_writes = off
I don't have any numbers handy on how much turning synchronous_commit
and full_page_writes off improves performance on a system with a
battery-backed write cache. Your numbers are therefore a bit inflated
against similar one
On Mon, Jun 28, 2010 at 1:12 PM, Craig James wrote:
> On 6/25/10 12:03 PM, Greg Smith wrote:
>>
>> Craig James wrote:
>>>
>>> I've got a new server and want to make sure it's running well.
>>
>> Any changes to the postgresql.conf file? Generally you need at least a
>> moderate shared_buffers (1GB
On 6/25/10 12:03 PM, Greg Smith wrote:
Craig James wrote:
I've got a new server and want to make sure it's running well.
Any changes to the postgresql.conf file? Generally you need at least a
moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments
(32 or higher) in order for t
Craig James wrote:
I've got a new server and want to make sure it's running well.
Any changes to the postgresql.conf file? Generally you need at least a
moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments
(32 or higher) in order for the standard pgbench test to give good
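In postgresql.conf terms, that advice corresponds to something like:

    shared_buffers = 1GB          # at least a moderate value
    checkpoint_segments = 32      # or higher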
On Fri, Jun 25, 2010 at 2:53 PM, Craig James wrote:
> I've got a new server and want to make sure it's running well. Are these
> pretty decent numbers?
>
> 8 cores (2x4 Intel Nehalem 2 GHz)
> 12 GB memory
> 12 x 7200 SATA 500 GB disks
> 3WARE 9650SE-12ML RAID controller with BBU
> WAL on ext2, 2
I've got a new server and want to make sure it's running well. Are these
pretty decent numbers?
8 cores (2x4 Intel Nehalem 2 GHz)
12 GB memory
12 x 7200 SATA 500 GB disks
3WARE 9650SE-12ML RAID controller with BBU
WAL on ext2, 2 disks: RAID1 500GB, blocksize=4096
Database on ext4, 8 disks:
Reydan Cankur wrote:
1) For calculating time to get the TPS, is pgbench using the wall
clock time or cpu time?
2) How is TPS calculated?
Wall clock time.
TPS = transactions processed / (end time - start time)
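So, for example, 10000 transactions completed over a 20-second wall-clock window would be reported as 10000 / 20 = 500 tps.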
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Suppor
Hi,
I am using pgbench for running tests on PostgreSQL.
I have a few questions:
1) For calculating time to get the TPS, is pgbench using the wall
clock time or cpu time?
2) How is TPS calculated?
Thanks in advance,
Reydan
Reydan Cankur wrote:
I have compiled PostgreSQL 8.4 from source code, and in order to
install pgbench, I go into the contrib folder and run the commands below:
make
make install
When I type pgbench as a command, the system cannot find it.
Do regular PostgreSQL commands such as psql work
Hi All,
I have compiled PostgreSQL 8.4 from source code, and in order to install
pgbench, I go into the contrib folder and run the commands below:
make
make install
When I type pgbench as a command, the system cannot find it.
As a result I cannot use pgbench-tools because the system does not int
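The usual cause is that the contrib install directory is not on the PATH; a sketch, assuming the default /usr/local/pgsql prefix:

    export PATH=/usr/local/pgsql/bin:$PATH
    which pgbench                 # should now report /usr/local/pgsql/bin/pgbench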
> Does the one that ships in the installer not work?
> //Magnus
It does work. *putting ashes on my head* I Googled around and only found
pgbench.c; I never looked in the program directory. Sorry, my mistake.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0
> Hello Performancers,
>
> has anyone a pgBench tool running on Windows?
Does the one that ships in the installer not work?
//Magnus
Hello Performancers,
Does anyone have a pgbench tool running on Windows? I want to experiment
with various settings to tune, and would prefer using something ready-made
before coming up with my own mistakes.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stu
Hi All,
Here are some of the results I got after running pgbench benchmarks
comparing PostgreSQL 7.4.5 and PostgreSQL 8.1.2, with the same parameter
values in the postgresql.conf file.
[EMAIL PROTECTED]:/newdisk/postgres/data> /usr/local/pgsql7.4.5/bin/pgbench -c 10 -t 1 regressionstart
Well, it tells you how many transactions per second it was able to do.
Do you have specific questions?
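For reference, the tail of a pgbench run of that era looks roughly like this (the numbers are illustrative):

    number of clients: 10
    number of transactions per client: 1000
    number of transactions actually processed: 10000/10000
    tps = 485.23 (including connections establishing)
    tps = 510.62 (excluding connections establishing)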
On Thu, Feb 02, 2006 at 12:39:59PM +0530, Pradeep Parmar wrote:
> Hi,
>
> I'm fairly new to PostgreSQL. I was trying pgbench, but could not
> understand the output. Can anyone help me out to u
Hi,
I'm fairly new to PostgreSQL. I was trying pgbench, but could not
understand the output. Can anyone help me out to understand the output of
pgbench?
Pradeep
On Wed, 2005-11-02 at 21:16 +1100, Gavin Sherry wrote:
> connections are updating the branches table heavily. As an aside, did you
> initialise with a scaling factor of 10 to match your level of concurrency?
Yep, I did.
> that. The hackers list archive also contains links to the testing Mark
> Wo
On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:
> Hi Gavin,
>
> Thanks for answering.
>
> On Tue, 2005-11-01 at 20:16 +1100, Gavin Sherry wrote:
> > On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:
> > > 1. Is there a repository somewhere that shows results, using and
> > > documenting different kinds of
Hi Gavin,
Thanks for answering.
On Tue, 2005-11-01 at 20:16 +1100, Gavin Sherry wrote:
> On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:
> > 1. Is there a repository somewhere that shows results, using and
> > documenting different kinds of hard- and software setups so that I can
> > compare my resu
On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:
> Hi,
>
> I am trying to optimize my Debian Sarge AMD64 PostgreSQL 8.0
> installation, based on the recommendations from "the Annotated
> POSTGRESQL.CONF Guide for
> PostgreSQL"
> (http://www.powerpostgresql.com/Downloads/annotated_conf_80.html). To se
Hi,
I am trying to optimize my Debian Sarge AMD64 PostgreSQL 8.0
installation, based on the recommendations from "the Annotated
POSTGRESQL.CONF Guide for
PostgreSQL" (http://www.powerpostgresql.com/Downloads/annotated_conf_80.html).
To see the result of the recommendations I use pgbench from
po
pgbench is located in the contrib directory of any source tarball,
along with a README that serves as documentation.
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN
Phillip,
> I am looking for the latest pgbench and documentation.
Currently they are packaged with the PostgreSQL source code.
However, if you're looking for a serious benchmark, may I suggest OSDL's DBT2?
It's substantially similar to TPC-C.
http://sourceforge.net/projects/osdldbt
What's y
I am looking for the latest pgbench and documentation.
If someone knows where I can locate them, it would save a lot of search time.
Thanks
Philip Pinkerton
TPC-C Benchmarks Sybase
Independent Consultant
Rio de Janeiro, RJ, Brazil 22031-010
Tom,
Honestly, you've got me. It was a comment from either Tom Lane or Josh
that the OS is caching the results (I may not be using the right terms
here), so I thought that if the database is dropped and recreated, I would
see less of a skew (or variation) in the results. Anyone care to comment?
Stev
Considering the default vacuuming behavior, why would this be?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 25, 2005, at 12:18 PM,
Tom,
Just a quick thought: after each run/sample of pgbench, I drop the
database and recreate it. When I don't, my results become more skewed.
Steve Poe
Thomas F.O'Connell wrote:
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do that.
I
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do that.
I was seeing oscillation across the majority of iterations in the 25
clients/1000 transaction runs on both database versions.
I've got my box specs and configuration files posted.
> There was some interesting oscillation behavior in both versions of
> postgres that occurred with 25 clients and 1000 transactions at a
> scaling factor of 100. This was repeatable with the distribution
> version of pgbench run iteratively from the command line. I'm not sure
> how to explain this.
T
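A sketch of the kind of iterative run being described, matching the 25-client/1000-transaction case (database name assumed):

    for i in $(seq 1 32); do
        pgbench -c 25 -t 1000 testdb | grep excluding   # keep the per-run tps line
    done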
Steve,
Per your and Tom's recommendations, I significantly increased the
number of transactions used for testing. See my last post.
The database will have pretty heavy mixed use, i.e., both reads and
writes.
I performed 32 iterations per scenario this go-round.
I'll look into OSDB for further b
Okay. I updated my benchmark page with new numbers, which are the
result of extensive pgbench usage over this past week. In fact, I
modified pgbench (for both of the latest versions of postgres) to be
able to accept multiple iterations as an argument and report the
results of each iteration as w
Tom,
People's opinions on pgbench may vary, so take what I say with a grain
of salt. Here are my thoughts:
1) Test with no less than 200 transactions per client. I've heard that with
less than this, your results will vary too much with the direction of
the wind blowing. A high enough value will help
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
> http://www.sitening.com/pgbench.html
You need to run *many* more transactions than that to get pgbench
numbers that aren't mostly noise. In my experience 1000 transactions
per client is a rock-bottom minimum to get repeatable numbers; 1 per
i
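In command-line terms, that minimum looks like the following (database name assumed):

    pgbench -c 25 -t 1000 testdb   # at least 1000 transactions per client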
I'm in the fortunate position of having a newly built database server
that's pre-production. I'm about to run it through the ringer with some
simulations of business data and logic, but I wanted to post the
results of some preliminary pgbench marking.
http://www.sitening.com/pgbench.html
To me,
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I received a copy of pgbench rewritten in Pro*C, which is similar to
> > embedded C. I think it was done so the same program could be tested on
> > Oracle and PostgreSQL.
>
> > Are folks interested in this code? Should it be put on
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I received a copy of pgbench rewritten in Pro*C, which is similar to
> embedded C. I think it was done so the same program could be tested on
> Oracle and PostgreSQL.
> Are folks interested in this code? Should it be put on gborg or in our
> /contrib/p
I received a copy of pgbench rewritten in Pro*C, which is similar to
embedded C. I think it was done so the same program could be tested on
Oracle and PostgreSQL.
Are folks interested in this code? Should it be put on gborg or in our
/contrib/pgbench?
--
Bruce Momjian
Greetings all,
I'm wondering: is there a website where people can submit their pgbench
results along with their hardware and configurations? If so, where is it?
I have yet to find one. I think this could be a very useful tool, not
only for people looking at setting up a new server but for peo