[PERFORM] pgCluster and postgres 8.1

2006-03-09 Thread Javier Somoza






    Hi all!

    I want to test my system's performance when using pgCluster.
    I'm using PostgreSQL 8.1.0 and I've downloaded pgcluster-1.5.0rc7
    and pgcluster-1.5.0rc7-patch.

    Do I need to recompile PostgreSQL with the patch?
    Can I use pgcluster-1.5 with this version of PostgreSQL?

    Thanks all




Re: [PERFORM] firebird X postgresql 8.1.2 windows, performance comparison

2006-03-09 Thread Carlos Henrique Reimer
Andre,

Could the PostgreSQL file cache be getting evicted by Firebird activity? Have you tried decreasing random_page_cost to 3 or 2? It would also be better if only one person made configuration changes; otherwise it will be difficult to measure the impact of each change.

Reimer

Andre Felipe Machado <[EMAIL PROTECTED]> escreveu:

Hello,

Many thanks for the valuable suggestions and insights.

The defaults enable_bitmapscan and enable_seqscan were altered by my friend. He has already re-enabled them (maybe even while I was trying some of the queries).

The machine is a P4 3.2 GHz, 1 GB RAM, SATA HD, Windows 2000. I have not used pg on Windows before, so I had no advice for my friend.

The previously attached file contains SOME relevant info from the psql session, in order not to clutter the file. Whenever a server parameter was modified (at least by me) and the server restarted, a new SHOW of the parameter was issued to display the new value.

Firebird is running on the same machine.

As you can see from the session log, indexes were created on the columns used, and the tables were first clustered on the indexes actually used by the query. The subsequent CLUSTER commands only re-cluster on the same indexes previously clustered.

shared_buffers was increased from 1000 to 16384 pages.
effective_cache_size was increased from 1000 to 65535 pages and at the final steps REDUCED to 8192 pages.
work_mem was increased from 1024 first to 16384 KB and then to 65535 KB.

The first two parameters reduced time 18%. work_mem reduced time almost 66%. But work_mem can easily exhaust RAM with many users connected, as each connection's query will use this amount of memory (if I remember correctly). How much can it grow on this 1 GB RAM Windows machine?

Some of the docs I have read suggested that indexes should be entirely contained in RAM. How do I dimension the parameters? Other docs advised that some memory parameters can actually degrade performance if set too big. There are peak points on the performance curve when adjusting memory parameters.

I hope tomorrow to execute EXPLAIN with bitmapscan and seqscan enabled. Are bitmap scans almost always faster?

The data, as far as I know, are a sample of real app data (hey, if and when in production, it will be even larger). They are almost truly random, as my friend informed me, and according to him clustering should not really be of benefit. This seems confirmed by the various EXPLAIN ANALYZE commands before and after clustering.

Any suggestions? Do you see some obvious error in the steps in the previous session log file?

It seems that Firebird on Windows can adequately use as much RAM as it finds and PostgreSQL on Windows cannot. How to dimension RAM for the indexes? Only by trial and error? I tried some suggested values found in some tuning docs suitable for the available system RAM.

Thanks
Andre Felipe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq

Re: [PERFORM] firebird X postgresql 8.1.2 windows, performance comparison

2006-03-09 Thread PFC




> I hope tomorrow to execute EXPLAIN with bitmapscan and seqscan enabled.
> Are bitmap scans almost always faster?


	Like all the rest, they're just a tool, which works great when used for
its intended purpose:

	- Fetching just a few percent of the rows from a table is better served
by an index scan.
	- Fetching a lot of rows (>30-50%) from a table is better served by a
seq scan.
	- Bitmap scan comes in between, and it's a very welcome addition.

	Also Bitmap scan will save your life if you have complex searches, like  
if you run a dating site and have an index on blondes and an index on boob  
size, because it can use several indexes in complex AND/OR queries.
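	A sketch of what that looks like (hypothetical table and index names; the point is the BitmapAnd node combining two single-column indexes):

```sql
-- Hypothetical schema with one index per searchable column
CREATE INDEX profiles_hair_idx ON profiles (hair_color);
CREATE INDEX profiles_city_idx ON profiles (city);

-- A combined search can use BOTH indexes at once:
EXPLAIN SELECT * FROM profiles
WHERE hair_color = 'blonde' AND city = 'Paris';
-- The plan typically shows a BitmapAnd over two Bitmap Index Scans,
-- then a single Bitmap Heap Scan fetching the matching rows.
```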


	Common wisdom says simpler databases can be faster than postgres on  
simple queries.


Reality check with pg 8.1 driven by PHP:

- SELECT 1
  mysql 5  ~ 42 us
  postgres ~ 70 us

- SELECT * FROM users WHERE id=1
  mysql 5  ~ 180 us
  postgres ~ 160 us

	Of course people doing stupid things, like using the database to keep a  
hit counter on their website which is updated on every hit, will say that  
postgres is slow.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] pg_reset_stats + cache I/O %

2006-03-09 Thread mcelroy, tim





Sorry I realized your fears :)


PostgreSQL is a new (last four months) install here and I'm responsible for it.  Great DB, and I enjoy working with it a lot and learning its nuances.  Keep in mind that management are 'old-time' system folks who love charts showing system and, in this case, DB performance.  I'm basically just using the out-of-the-box defaults in my postgresql.conf file and that seems to be working so far.  But as the DB grows I need a way to prove the DB is functioning properly when apps get slow.  You know the old "you're guilty till proven innocent" syndrome.  OK, enough on that.

Yes, thank you, we try to keep on the ball regarding system monitoring.  BTW - I'm still waiting to see if anyone out there can say yea or nay on whether the SQL I wrote is a valid indicator of the overall cache hit %?

> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +
> blks_read::numeric)) * 100,2)
> AS "Cache % Hit"
> FROM pg_stat_database
> WHERE datname = 'Fix1';
> 
> 
> 
> Cache % Hit
> 
>    98.06
> (1 row)
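For what it's worth, the expression as written subtracts the hit ratio from 100, so the number it returns is actually the cache *miss* percentage. A form that reports the hit percentage directly (a sketch using the same pg_stat_database columns; NULLIF guards the just-after-reset case where both counters are zero) would be:

```sql
SELECT round(blks_hit::numeric * 100
             / NULLIF(blks_hit + blks_read, 0), 2) AS "Cache % Hit"
FROM pg_stat_database
WHERE datname = 'Fix1';
```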


Thank you,
Tim



 -Original Message-
From:   Jim C. Nasby [mailto:[EMAIL PROTECTED]] 
Sent:   Wednesday, March 08, 2006 10:24 PM
To: mcelroy, tim
Cc: 'pgsql-performance@postgresql.org'
Subject:    Re: [PERFORM] pg_reset_stats + cache I/O %


On Wed, Mar 08, 2006 at 01:35:35PM -0500, mcelroy, tim wrote:
> I actually need this info as I was tasked by management to provide it.  Not
> sure if they understand that or not, I do but management does like to see
> how well the system and its components are performing.  Also, I would
> utilize these results to test any cache tuning changes I may make.  


What I feared. While monitoring cache hit % over time isn't a bad idea,
it's less than half the picture, which makes fertile ground for
optimizing for some mythical target instead of actual system
performance. If the "conclusion" from these numbers is that
shared_buffers needs to get set larger than min(5, 10% of memory)
I'd very seriously re-consider how performance tuning is being done.


But hopefully I'm just being paranoid and you guys are just doing a
great job of monitoring things and keeping on the ball. :)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





Re: [PERFORM] Hanging queries on dual CPU windows

2006-03-09 Thread Jan de Visser
I have more information on this issue.

First off, the problem now happens after about 1-2 hours, as opposed to the 6-8
I mentioned earlier. Yay for shorter test cycles.

Furthermore, it does not happen on Linux machines, either single CPU or dual
CPU, nor on single CPU Windows machines. We can only reproduce it on a dual CPU
Windows machine, and if we take one CPU out, it does not happen.

I executed the following after it hung:

db=# select l.pid, c.relname, l.mode, l.granted, l.page, l.tuple 
from pg_locks l, pg_class c where c.oid = l.relation order by l.pid;

Which showed me that several transactions were waiting for a particular row
which was locked by another transaction. That transaction had no pending
locks (so no deadlock), but it just does not complete and hence never
relinquishes the lock.
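For reference, a sketch of pairing waiters with holders directly in pg_locks (there is no pg_blocking_pids() helper in this era; an ungranted request and the granted lock on the same relation/page/tuple identify the blocking pair):

```sql
-- Sketch: match each ungranted lock request with the granted lock
-- on the same relation/page/tuple to see which pid blocks which.
SELECT w.pid AS waiter, h.pid AS holder, c.relname, w.mode AS wanted
FROM pg_locks w
JOIN pg_locks h
  ON h.granted AND NOT w.granted
 AND h.relation = w.relation
 AND COALESCE(h.page, -1)  = COALESCE(w.page, -1)
 AND COALESCE(h.tuple, -1) = COALESCE(w.tuple, -1)
JOIN pg_class c ON c.oid = w.relation;
```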

What gives? Has anybody ever heard of problems like this on dual CPU Windows
machines?

jan



On Monday 06 March 2006 09:38, Jan de Visser wrote:
> Hello,
>
> While doing performance tests on Windows Server 2003 we observed to
> following two problems.
>
> Environment: J2EE application running in JBoss application server, against
> pgsql 8.1 database. Load is caused by a smallish number of (very) complex
> transactions, typically about 5-10 concurrently.
>
> The first one, which bothers me the most, is that after about 6-8 hours the
> application stops processing. No errors are reported, neither by the JDBC
> driver nor by the server, but when I kill the application server, I see
> that all my connections hang in a SQL statements (which never seem to
> return):
>
> 2006-03-03 08:17:12 4504 6632560 LOG:  duration: 45087000.000 ms
>  statement: EXECUTE   [PREPARE:  SELECT objID FROM objects WHERE
> objID = $1 FOR UPDATE]
>
> I think I can reliably reproduce this by loading the app, and waiting a
> couple of hours.

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--



Re: [PERFORM] Hanging queries on dual CPU windows

2006-03-09 Thread Tom Lane
Jan de Visser <[EMAIL PROTECTED]> writes:
> Furthermore, it does not happen on Linux machines, either single CPU or dual 
> CPU, nor on single CPU Windows machines. We can only reproduce it on a dual CPU 
> Windows machine, and if we take one CPU out, it does not happen.
> ...
> Which showed me that several transactions were waiting for a particular row 
> which was locked by another transaction. That transaction had no pending 
> locks (so no deadlock), but it just does not complete and hence never 
> relinquishes the lock.

Is the stuck transaction still consuming CPU time, or just stopped?

Is it possible to get a stack trace from the stuck process?  I dunno
if you've got anything gdb-equivalent under Windows, but that's the
first thing I'd be interested in ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Hanging queries on dual CPU windows

2006-03-09 Thread Jan de Visser
On Thursday 09 March 2006 15:10, Tom Lane wrote:
> Jan de Visser <[EMAIL PROTECTED]> writes:
> > Furthermore, it does not happen on Linux machines, either single CPU or
> > dual CPU, nor on single CPU Windows machines. We can only reproduce it on a
> > dual CPU Windows machine, and if we take one CPU out, it does not happen.
> > ...
> > Which showed me that several transactions were waiting for a particular
> > row which was locked by another transaction. That transaction had no
> > pending locks (so no deadlock), but it just does not complete and hence
> > never relinquishes the lock.
>
> Is the stuck transaction still consuming CPU time, or just stopped?

CPU drops off. In fact, that's my main clue something's wrong ;-)

>
> Is it possible to get a stack trace from the stuck process?  I dunno
> if you've got anything gdb-equivalent under Windows, but that's the
> first thing I'd be interested in ...

I wouldn't know. I'm hardly a windows expert. Prefer not to touch the stuff, 
myself. Can do some research though...

>
>   regards, tom lane

jan

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Hanging queries on dual CPU windows

2006-03-09 Thread Magnus Hagander
> Is it possible to get a stack trace from the stuck process?  
> I dunno if you've got anything gdb-equivalent under Windows, 
> but that's the first thing I'd be interested in ...

Try Process Explorer from www.sysinternals.com.

//Magnus



Re: [PERFORM] pgCluster and postgres 8.1

2006-03-09 Thread Chris

Javier Somoza wrote:

I want to test my system performance when using pgCluster.
I'm using PostgreSQL 8.1.0 and I've downloaded pgcluster-1.5.0rc7
and pgcluster-1.5.0rc7-patch.

Do I need to recompile PostgreSQL with the patch?
Can I use pgcluster-1.5 with this version of PostgreSQL?


What does the documentation that comes with the patch say?

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [PERFORM] Hanging queries on dual CPU windows

2006-03-09 Thread Jan de Visser
On Thursday 09 March 2006 15:10, Tom Lane wrote:
> Is it possible to get a stack trace from the stuck process?  I dunno
> if you've got anything gdb-equivalent under Windows, but that's the
> first thing I'd be interested in ...

Here ya go:

http://www.devisser-siderius.com/stack1.jpg
http://www.devisser-siderius.com/stack2.jpg
http://www.devisser-siderius.com/stack3.jpg

There are three threads in the process. I guess thread 1 (stack1.jpg) is the 
most interesting.

I also noted that cranking up concurrency in my app reproduces the problem in 
about 4 minutes ;-)

With thanks to Magnus Hagander for the Process Explorer hint.

jan

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] firebird X postgresql 8.1.2 windows, performance comparison

2006-03-09 Thread David Brown

Andre Felipe Machado wrote:


It seems that Firebird on Windows can adequately use as much RAM as it
finds and PostgreSQL on Windows cannot.

PostgreSQL relies on the OS cache to utilize RAM. Make sure that most of 
the RAM is 'available' so Windows can do its thing.


effective_cache_size should be set correspondingly high - at least 65535.

shared_buffers should be as low as you can get away with (allowing for 
multiple users). 16384 is 12.5% of your RAM and far too high.


AFAIK, PostgreSQL still doesn't differentiate between index blocks and 
data blocks.
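Pulled together as a postgresql.conf fragment (values are illustrative for a 1 GB Windows box with 8 kB pages, not a recommendation for any particular workload):

```sql
-- postgresql.conf sketch -- illustrative values for ~1 GB RAM, 8 kB pages
-- shared_buffers = 6250          -- ~50 MB; keep low, leave RAM to the OS cache
-- effective_cache_size = 65536   -- ~512 MB; what the OS is likely caching
-- work_mem = 4096                -- 4 MB per sort/hash; raise per query as needed
```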



work_mem reduced time almost 66%.
But work_mem can easily exhaust RAM with many users connected, as each
connection's query will use this amount of memory (if I remember correctly).
How much can it grow on this 1 GB RAM Windows machine?
 



work_mem has to be just big enough to allow hash joins to operate 
efficiently. This varies from query to query and can be set in your code 
accordingly. However, the 1024 default is just too low for most 
applications and you'll probably find even 4096 is a huge improvement. 
You need to find the minimum that delivers acceptable performance in 
most queries and boost it for selected queries as required.
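Boosting it for a selected query can be done at the session level (a sketch; the table names are hypothetical, and in this era the SET value is taken in kB):

```sql
-- Keep the global default low; raise work_mem only around the heavy query:
SET work_mem = 65536;            -- 64 MB for this session (value in kB)
SELECT o.id, sum(i.amount)
FROM orders o JOIN order_items i ON i.order_id = o.id
GROUP BY o.id;                   -- hypothetical hash-join-heavy query
RESET work_mem;                  -- back to the postgresql.conf value
```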


BTW, which version of Firebird is this?

---(end of broadcast)---
TIP 6: explain analyze is your friend


[PERFORM] Using materialized views for commonly-queried subsets

2006-03-09 Thread Casey Allen Shobe
I typed up a description of a situation where the only viable option to 
improve performance was to use a materialized view, which, when implemented, 
was found to improve performance twenty-sevenfold, even with a fairly small 
amount of excess data (which is anticipated to grow).  I thought this might be 
of use to anybody else in a similar situation, so I thought I'd post it here.

http://community.seattleserver.com/viewtopic.php?t=11

Feel free to reproduce as you see fit.
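Since there is no CREATE MATERIALIZED VIEW in this era of PostgreSQL, the usual approach is a hand-rolled summary table plus a refresh function; a minimal sketch with hypothetical table names (trigger-based incremental maintenance being the usual next step):

```sql
-- Hypothetical base table and hand-rolled materialized view
CREATE TABLE page_hits (page_id int, hit_time timestamp);

CREATE TABLE page_hit_counts AS
  SELECT page_id, count(*) AS hits FROM page_hits GROUP BY page_id;

-- Simplest refresh strategy: rebuild the summary on demand
CREATE OR REPLACE FUNCTION refresh_page_hit_counts() RETURNS void AS $$
BEGIN
  DELETE FROM page_hit_counts;
  INSERT INTO page_hit_counts
    SELECT page_id, count(*) FROM page_hits GROUP BY page_id;
END;
$$ LANGUAGE plpgsql;
```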

Cheers,
-- 
Casey Allen Shobe | [EMAIL PROTECTED] | 206-381-2800
SeattleServer.com, Inc. | http://www.seattleserver.com



[PERFORM] Process Time X200

2006-03-09 Thread NbForYou



Hello,

I could use some help.

I have a PostgreSQL database. When I do a query on my home server the result comes back fast, but when I do the same query on my webhost server the query is useless because of the processing time (200 times slower: 56366.20 / 281.000 = 200.59). My PC is just a simple PC compared to the high quality systems my webhost uses.

I have included the query plan and the table.

Query:

explain analyze
SELECT B.gegevensnaam AS boss, E.gegevensnaam
FROM nieuw_gegevens AS E
LEFT OUTER JOIN nieuw_gegevens AS B
  ON B.lft = (SELECT MAX(lft) FROM nieuw_gegevens AS S
              WHERE E.lft > S.lft AND E.lft < S.rgt)
ORDER BY boss, gegevensnaam;

On the WEBHOST:

QUERY PLAN
Sort  (cost=1654870.86..1654871.87 rows=403 width=38) (actual time=56365.13..56365.41 rows=403 loops=1)
  Sort Key: b.gegevensnaam, e.gegevensnaam
  ->  Nested Loop  (cost=0.00..1654853.42 rows=403 width=38) (actual time=92.76..56360.79 rows=403 loops=1)
        Join Filter: ("inner".lft = (subplan))
        ->  Seq Scan on nieuw_gegevens e  (cost=0.00..8.03 rows=403 width=19) (actual time=0.03..1.07 rows=403 loops=1)
        ->  Seq Scan on nieuw_gegevens b  (cost=0.00..8.03 rows=403 width=19) (actual time=0.00..0.79 rows=403 loops=403)
        SubPlan
          ->  Aggregate  (cost=10.16..10.16 rows=1 width=4) (actual time=0.34..0.34 rows=1 loops=162409)
                ->  Seq Scan on nieuw_gegevens s  (cost=0.00..10.04 rows=45 width=4) (actual time=0.20..0.33 rows=2 loops=162409)
                      Filter: (($0 > lft) AND ($0 < rgt))
Total runtime: 56366.20 msec
11 row(s)
Total runtime: 56,370.345 ms

On my HOMESERVER:

QUERY PLAN
Sort  (cost=12459.00..12461.04 rows=813 width=290) (actual time=281.000..281.000 rows=403 loops=1)
  Sort Key: b.gegevensnaam, e.gegevensnaam
  ->  Merge Left Join  (cost=50.94..12419.71 rows=813 width=290) (actual time=281.000..281.000 rows=403 loops=1)
        Merge Cond: ("outer"."?column3?" = "inner".lft)
        ->  Sort  (cost=25.47..26.48 rows=403 width=149) (actual time=281.000..281.000 rows=403 loops=1)
              Sort Key: (subplan)
              ->  Seq Scan on nieuw_gegevens e  (cost=0.00..8.03 rows=403 width=149) (actual time=0.000..281.000 rows=403 loops=1)
              SubPlan
                ->  Aggregate  (cost=10.16..10.16 rows=1 width=4) (actual time=0.697..0.697 rows=1 loops=403)
                      ->  Seq Scan on nieuw_gegevens s  (cost=0.00..10.05 rows=45 width=4) (actual time=0.308..0.658 rows=2 loops=403)
                            Filter: (($0 > lft) AND ($0 < rgt))
        ->  Sort  (cost=25.47..26.48 rows=403 width=149) (actual time=0.000..0.000 rows=770 loops=1)
              Sort Key: b.lft
              ->  Seq Scan on nieuw_gegevens b  (cost=0.00..8.03 rows=403 width=149) (actual time=0.000..0.000 rows=403 loops=1)
Total runtime: 281.000 ms
15 row(s)
Total runtime: 287.273 ms

As you can see, the query isn't usable anymore because of the processing time. Please also notice that the two systems use different query plans, and that on the webhost we have a loop of 162409 (403 rows * 403 rows). The two systems also run different PostgreSQL versions, but I cannot believe that the performance difference between versions could be this big for self outer join queries!

Table:

CREATE TABLE nieuw_gegevens (
  gegevensid int4 NOT NULL DEFAULT nextval('nieuw_gegevens_gegevensid_seq'::text),
  gegevensnaam varchar(255) NOT NULL,
  lft int4 NOT NULL,
  rgt int4 NOT NULL,
  keyword text,
  CONSTRAINT nieuw_gegevens_pkey PRIMARY KEY (gegevensid),
  CONSTRAINT nieuw_gegevens_gegevensnaam_key UNIQUE (gegevensnaam)
) WITH OIDS;

Does anyone know how to resolve this problem? Could it be that the configuration of the webhost PostgreSQL is wrong?

Thank you
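One thing both plans make visible: the correlated MAX() subquery seq-scans nieuw_gegevens for every outer row (162409 times on the webhost), and the table DDL shows no index on the nested-set columns it filters on. A hedged first step, before comparing server configurations, would be:

```sql
-- Sketch: index the nested-set columns used by the subquery filter and
-- the join comparison, then refresh planner statistics. Whether the
-- planner actually uses them here should be verified with EXPLAIN ANALYZE.
CREATE INDEX nieuw_gegevens_lft_idx ON nieuw_gegevens (lft);
CREATE INDEX nieuw_gegevens_rgt_idx ON nieuw_gegevens (rgt);
ANALYZE nieuw_gegevens;
```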