Re: [PERFORM] PG writes a lot to the disk

2008-03-21 Thread Laurent Raufaste
> set.
>
> Slony's inserts, updates, and deletes count as updates to the table as well.

Slony is shut down when I'm testing.

-- Laurent Raufaste <http://www.glop.org/>

Re: [PERFORM] PG writes a lot to the disk

2008-03-21 Thread Laurent Raufaste
Table files are modified during SELECT, and it can result in a lot of writes if the query plans work on a lot of rows.

Thanks for your help, I'm relieved =)

-- Laurent Raufaste <http://www.glop.org/>
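A minimal sketch of the behavior being described, assuming the writes come from the first reader having to set row status (hint) bits on freshly loaded pages; the table name "t" is only a placeholder, not from the thread:

    -- Load rows, let VACUUM rewrite the pages up front,
    -- then the SELECT can read the table without dirtying it.
    CREATE TABLE t AS SELECT generate_series(1, 1000000) AS id;
    VACUUM t;
    SELECT count(*) FROM t;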

Re: [PERFORM] PG writes a lot to the disk

2008-03-20 Thread Laurent Raufaste
2008/3/19, Laurent Raufaste <[EMAIL PROTECTED]>:
> What does it write so much in the base directory? If it's some
> temporary table or anything, how can I locate it so I can fix the
> problem?

Thanks for your help everybody! I fixed the problem by doing an ANALYZE to
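A sketch of the kind of statement described; the table names are assumptions borrowed from the related planning thread, not from this message:

    ANALYZE _comment;
    ANALYZE _article;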

[PERFORM] PG writes a lot to the disk

2008-03-19 Thread Laurent Raufaste
max_stack_depth = 7MB
default_statistics_target = 100
effective_cache_size = 20GB

Thanks a lot for your advice!

-- Laurent Raufaste <http://www.glop.org/>
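The underlying question in this thread is what PostgreSQL keeps writing under base/. One way to map a growing file there back to its relation, as a sketch (the relfilenode value 12345 is a placeholder for the observed file name):

    -- Files live under base/<database oid>/<relfilenode>.
    SELECT oid, datname FROM pg_database;        -- which database directory
    SELECT relname, relkind FROM pg_class
    WHERE relfilenode = 12345;                   -- which table or index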

Re: [PERFORM] PG planning randomly ?

2008-02-28 Thread Laurent Raufaste
.31719108.69 rows=155897 width=533) (actual time=0.286..1.412 rows=5 loops=1)
  Filter: (path <@ '0.1.4108047'::ltree)
  ->  Index Scan using _article_pkey on _article  (cost=0.00..6.04 rows=1 width=41) (actual time=0.038..0.039 rows=1 loops=5)
        Index Cond: (_article.id = _comment.parent_id)

Re: [PERFORM] PG planning randomly ?

2008-02-27 Thread Laurent Raufaste
Total runtime: 3.160 ms

The runtime is OK, but the planned cost is huge, because the row estimate of the index scan is about 100x too high. After the ANALYZE it was like the others. If this wrong row count happens, I understand why the planner tries to find an alternative plan in the first query I showed
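A small sketch of how to see what ANALYZE actually stored for the column driving that estimate (table and column names as used in the thread):

    SELECT attname, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = '_comment' AND attname = 'path';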

Re: [PERFORM] PG planning randomly ?

2008-02-26 Thread Laurent Raufaste
2008/2/26, Tom Lane <[EMAIL PROTECTED]>:
> "Laurent Raufaste" <[EMAIL PROTECTED]> writes:
> > 2008/2/26, Tom Lane <[EMAIL PROTECTED]>:
> >> If it's 8.2 or later then increasing the stats target for _comment.path
> >> to 100
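The per-column form of that suggestion, as a sketch (100 is the value mentioned; a fresh ANALYZE is needed before the planner sees the new statistics):

    ALTER TABLE _comment ALTER COLUMN path SET STATISTICS 100;
    ANALYZE _comment;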

Re: [PERFORM] PG planning randomly ?

2008-02-26 Thread Laurent Raufaste
because it implies some LOCK on our replication cluster. Do you think the planner will act differently by using an ALTER TABLE rather than just the "SET default_statistics_target" command? If so, I will try it =)

Thanks.

-- Laurent Raufaste <http://www.glop.org/>
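For comparison, the session-level route being weighed against ALTER TABLE, as a sketch: SET only changes a configuration value for the current session (no table lock of its own), and the higher target applies to whatever ANALYZE is run afterwards, while ALTER TABLE ... SET STATISTICS records the target in the catalog:

    SET default_statistics_target = 100;
    ANALYZE _comment;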

Re: [PERFORM] PG planning randomly ?

2008-02-26 Thread Laurent Raufaste
2008/2/26, Laurent Raufaste <[EMAIL PROTECTED]>:
> Hi,
>
> I'm having some issues with this simple query:
>
> SELECT
>     _comment.*,
>     _article.title AS article_title,
>     _article.reference AS article_reference
> FROM
>     _comment

[PERFORM] PG planning randomly ?

2008-02-26 Thread Laurent Raufaste
  Filter: (path <@ '0.1.14666029'::ltree)
  ->  Index Scan using _article_pkey on _article  (cost=0.00..9.15 rows=1 width=41) (actual time=0.034..0.034 rows=1 loops=5)
        Index Cond: (_article.id = _comment.parent_id)
Total runtime: 286416.339 ms

Ho