[PERFORM] query performance, where goes time?

2012-09-05 Thread Anibal David Acosta
Using explain analyze I saw that many of my queries run really fast, less
than 1 millisecond; for example, the analyze output of a simple query over a
table with 5 million records returns "Total runtime: 0.078 ms".

But the real time is a lot more, about 15 ms; in fact pgAdmin shows this
value.

So, where do the other 14.2 ms go?

Network transfer (TCP)?

Or does EXPLAIN ANALYZE's "Total runtime" not represent the query's runtime?

Thanks!



Re: [PERFORM] query performance, where goes time?

2012-09-05 Thread Craig Ringer

On 09/06/2012 07:48 AM, Anibal David Acosta wrote:

Using explain analyze I saw that many of my queries run really fast,
less than 1 millisecond; for example, the analyze output of a simple
query over a table with 5 million records returns "Total runtime:
0.078 ms"

But the real time is a lot more, about 15 ms; in fact pgAdmin shows
this value.

So, where do the other 14.2 ms go?


Client-side latency, time spent transmitting query results, and network 
latency.


You'll see much less difference in queries that take more meaningful 
amounts of time. This query is so fast that timing accuracy will be an 
issue on some systems, and so will scheduler jitter etc.
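
One way to see where the difference comes from is to compare psql's
client-side timing with the server-side number; a minimal sketch (the table
and filter are placeholders):

    -- Client-side timing: includes parse/plan, the network round trip,
    -- and result transfer; this is the kind of total pgAdmin shows.
    \timing on

    -- Server-side execution only; this is what "Total runtime" reports.
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;

    -- The same query without EXPLAIN: psql's "Time:" line will usually be
    -- noticeably larger than the Total runtime above.
    SELECT * FROM some_table WHERE id = 42;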


--
Craig Ringer





Re: [PERFORM] [repost] Help me develop new commit_delay advice

2012-09-05 Thread Greg Smith

On 08/02/2012 02:02 PM, Peter Geoghegan wrote:

I made what may turn out to be a useful observation during the
development of the patch, which was that for both the tpc-b.sql and
insert.sql pgbench-tools scripts, a commit_delay of half of my
wal_sync_method's reported raw sync speed looked optimal.


I dug up what I wrote when trying to provide better advice for this 
circa V8.3.  That never really gelled into something worth publishing at 
the time.  But I see some similar patterns to what you're reporting, 
so maybe this will be useful input to you now.  That included a 7200RPM 
drive and a system with a BBWC (battery-backed write cache).


In the BBWC case, the only useful tuning I found was to add a very small 
amount of commit_delay, possibly increasing the siblings too.  I was 
using http://benjiyork.com/blog/2007/04/sleep-considered-harmful.html to 
figure out the minimum sleep resolution on the server (3us at the time) 
and setting commit_delay to that; then increasing commit_siblings to 10 
or 20.  Jignesh Shah came back with something in the same sort of range 
then at 
http://jkshah.blogspot.com/2007/07/specjappserver2004-and-postgresql_09.html 
, setting commit_delay=10.
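
As a rough postgresql.conf sketch of that BBWC-era tuning (values are
illustrative, not a recommendation):

    # BBWC case: a tiny delay, in microseconds, only taken when a
    # handful of other backends also look ready to commit.
    commit_delay = 10
    commit_siblings = 10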


On the 7200RPM drive (~115 TPS), 1/2 of the drive's rotation was 
consistently what worked best for me across multiple tests too.  I also 
found lowering commit_siblings all the way to 1 could significantly 
improve the 2-client case when you did that.  Here are my notes from then:


commit_delay=4500, commit_siblings=1:  By waiting 1/2 a revolution if 
there's another active transaction, I get a small improvement at the 
low end (around an extra 20 TPS between 2 and 6 clients), while not 
doing much damage to the higher client loads.  This might 
be a useful tuning if your expected number of active clients is low, 
you don't have a good caching controller, but you'd like a little more 
oomph out of things.  The results for 7000 usec were almost as good. 
But in general, if you're stuck choosing between two commit_delay values 
you should use the smaller one as it will be less likely to have a bad 
impact on low client loads.
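
For reference, the arithmetic behind that value (my reconstruction): 7200 RPM
is 120 rotations per second, so a full rotation takes about 8300 usec and
half a rotation about 4200 usec, which is the neighborhood commit_delay=4500
sits in.  As a postgresql.conf sketch:

    # 7200RPM drive, no write cache, few concurrent clients:
    # wait roughly half a rotation if at least one other backend
    # is also about to commit.
    commit_delay = 4500
    commit_siblings = 1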


I also found that using a high delay, but only when a lot of clients were 
involved, worked a bit better than a 1/2 rotation:


commit_delay=10000, commit_siblings=20:  At higher client loads, there's 
almost invariably another commit coming right behind yours if you wait a 
bit.  Just plan to wait a bit more than an entire rotation between 
commits.  This buys me about an extra 30 TPS on the high client loads, 
which is a small fraction of an improvement (<5%) but might be worthwhile.
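
Spelled out the same way (again only a sketch of the settings described
above):

    # 7200RPM drive, many concurrent clients: only delay when plenty
    # of other backends are active, and wait a bit more than a full
    # rotation.
    commit_delay = 10000
    commit_siblings = 20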


The fact that the optimal delay seemed to need to vary a bit based on 
the number of siblings was one of the challenges I kept butting into 
then.  Making the GUC settings even more complicated for this doesn't 
seem like a productive step forward for the average user.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




Re: [PERFORM] exponential performance decrease in ISD transaction

2012-09-05 Thread Greg Smith

On 09/03/2012 01:27 PM, Jeff Janes wrote:

In any case, the behavior you report is exactly what would be
expected if autovacuum is not running.  The config file you posted
shows autovac is turned on, but I suspect that is not the config file
actually being used by the running server.


It's also important to note that:

1) autovacuum doesn't kick in until a moderate number of changes have 
been made.  Having it turned on doesn't mean it runs continuously.  The 
table can accumulate a lot of dead junk before autovacuum decides to 
clean things up.


2) When autovacuum *does* start, that can be a source of slowdowns itself.
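
Both the config-file suspicion quoted above and point 1 can be checked 
directly from SQL; a quick sketch (the table name is a placeholder):

    -- Which configuration file is the running server actually using,
    -- and does it have autovacuum enabled?
    SHOW config_file;
    SHOW autovacuum;

    -- Has autovacuum ever processed the table, and how much dead row
    -- junk has piled up since?
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     WHERE relname = 'your_table';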

I suspect that some sort of table cleanup issue is at play here.  I would also 
bet that the performance seen initially is inflated because Linux's 
write cache is absorbing writes at the beginning.  The first few hundred 
megabytes or possibly more you write to the database don't wait for 
physical I/O at all.  Once that cache fills, though, performance drops 
hard.  Most benchmarks like this will start out really fast, then drop 
off dramatically once the write cache is full, and real-world disk 
performance limits progress.


In those cases, the slower performance after things have been running a 
while is actually the real sustainable speed of the server.  The much 
faster ones may only be possible when the write cache is relatively 
empty, which makes them more representative of burst performance.


A look at the "Dirty:" line in /proc/meminfo as the test runs will give 
you an idea if write cache filling is actually an issue here.  If that 
number just keeps going up and speeds keep on dropping, that's at least 
one cause here.  This could easily be both that and an autovacuum 
related too though.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




[PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-05 Thread charles_xie
Hi all,
I have 5 servers with PostgreSQL installed.  I want to know how PostgreSQL
is doing on them and to monitor them, but I don't want to use monitoring
tools; I want to use SQL commands to monitor the PostgreSQL system.  Please
suggest any SQL commands that would work.  If you have a good suggestion,
you can email me (charles@sanmina-sci.com) or reach me on Skype: xqwbx163


best regards 

charles_xie
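
As a starting point (these are just the standard statistics views, not advice
specific to this thread; the pg_stat_activity column names below are for 9.2,
older releases use procpid and current_query instead):

    -- Who is connected and what is each session doing right now?
    SELECT pid, usename, datname, state, query
      FROM pg_stat_activity;

    -- Per-database totals: connections, commits, rollbacks, and
    -- buffer reads vs. cache hits.
    SELECT datname, numbackends, xact_commit, xact_rollback,
           blks_read, blks_hit
      FROM pg_stat_database;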





