Re: [PERFORM] Query improvement

2011-05-13 Thread Robert Haas
On Mon, May 2, 2011 at 3:58 AM, Claudio Freire wrote: > Hash joins are very inefficient if they require big temporary files. Hmm, that's not been my experience. What have you seen? I've seen a 64-batch hash join beat out a nested-loop-with-inner-indexscan, which I never woulda believed, but...
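For background: a hash join is split into batches and spills to temporary files when its hash table exceeds work_mem, and the batch count is reported under the Hash node in EXPLAIN ANALYZE output. A minimal sketch of how one might inspect and loosen that limit for a single session; the query, the rev_page join key (taken from the standard MediaWiki schema), and the 64MB figure are illustrative assumptions, not values from this thread:

    -- A hash join that exceeds work_mem spills to temp files; EXPLAIN ANALYZE then shows
    -- something like "Buckets: 4096  Batches: 64  Memory Usage: ..." under the Hash node.
    EXPLAIN ANALYZE
    SELECT p.page_title
    FROM mediawiki.page p
    JOIN mediawiki.revision r ON r.rev_page = p.page_id;  -- join key assumed from standard MediaWiki

    -- Giving the session more memory can bring Batches back down to 1 (value is illustrative only):
    SET work_mem = '64MB';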

Re: [PERFORM] Query improvement

2011-05-08 Thread Mark
Thanks a lot for the reply. Finally I used UNION, but thanks for your help. -- View this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378160.html

Re: [PERFORM] Query improvement

2011-05-08 Thread Mark
Thanks for the reply; both UNION and JOINs helped. Mainly the UNION helped a lot. Now the query takes 1 sec max. Thanks a lot. -- View this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378157.html

Re: [PERFORM] Query improvement

2011-05-08 Thread Mark
Thanks for the replies. Finally I used UNION and JOINs, which helped. Mainly the UNION helped a lot. Now the query takes 1 sec max. Thanks a lot. -- View this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378163.html
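The final query is not shown in the thread, but the usual pattern behind this kind of speedup is replacing an OR across differently-indexed conditions with a UNION, so each branch can use its own index. A hypothetical sketch only, with search conditions invented for illustration and join keys assumed from the standard MediaWiki schema:

    -- One query with OR across unrelated conditions often cannot use either index well:
    --   WHERE p.page_title = 'Main Page' OR pc.textvector @@ to_tsquery('english', 'main')
    -- Splitting it lets each branch pick its own index; UNION also removes duplicate rows.
    SELECT p.page_id, p.page_title
    FROM mediawiki.page p
    WHERE p.page_title = 'Main Page'                               -- can use an index on page_title
    UNION
    SELECT p.page_id, p.page_title
    FROM mediawiki.page p
    JOIN mediawiki.revision r     ON r.rev_page = p.page_id        -- join keys assumed from
    JOIN mediawiki.pagecontent pc ON pc.old_id  = r.rev_text_id    -- the standard MediaWiki schema
    WHERE pc.textvector @@ to_tsquery('english', 'main');          -- can use a GIN index on textvector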

Re: [PERFORM] Query improvement

2011-05-03 Thread Marc Mamin
> On Mon, May 2, 2011 at 10:54 PM, Mark wrote: > > but the result has been worse than before. By the way, is there a possibility > > to create a better query with the same effect? > > I have tried more queries, but this one has the best performance yet. > > Well, this seems to be the worst part: > >

Re: [PERFORM] Query improvement

2011-05-03 Thread Claudio Freire
On Mon, May 2, 2011 at 10:54 PM, Mark wrote: > but the result has been worse than before. By the way, is there a possibility > to create a better query with the same effect? > I have tried more queries, but this one has the best performance yet. Well, this seems to be the worst part: (SELECT
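The offending subquery is cut off in the archive, so the following is only a hypothetical illustration of the general technique Mark later reports helping (rewriting subqueries as joins). Table and column names (rev_page, rev_user_text) are assumed from the standard MediaWiki schema of that era:

    -- A condition written as IN over a subquery:
    SELECT p.page_title
    FROM mediawiki.page p
    WHERE p.page_id IN (SELECT r.rev_page
                        FROM mediawiki.revision r
                        WHERE r.rev_user_text = 'SomeEditor');

    -- can also be written as an explicit join, which gives the planner a different
    -- set of plan shapes to consider (DISTINCT keeps the result set equivalent):
    SELECT DISTINCT p.page_title
    FROM mediawiki.page p
    JOIN mediawiki.revision r ON r.rev_page = p.page_id
    WHERE r.rev_user_text = 'SomeEditor';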

Re: [PERFORM] Query improvement

2011-05-02 Thread Mark
Here is EXPLAIN ANALYZE:
Limit  (cost=136568.00..136568.25 rows=100 width=185) (actual time=1952.174..1952.215 rows=100 loops=1)
  ->  Sort  (cost=136568.00..137152.26 rows=233703 width=185) (actual time=1952.172..1952.188 rows=100 loops=1)
        Sort Key: ((ts_rank(pc.textvector, to_tsquer
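The sort key in this plan ranks rows with ts_rank over pc.textvector, so the part an index can help with is the full-text match itself; the ranking sort still has to run over the matching rows. A sketch under the assumption that textvector is a tsvector column on mediawiki.pagecontent, as the plan suggests (old_id and the search term are illustrative):

    -- GIN index so the @@ match does not have to scan every row:
    CREATE INDEX pagecontent_textvector_idx
        ON mediawiki.pagecontent USING gin (textvector);

    -- The index serves the WHERE clause; ts_rank is still computed per matching row for the sort.
    SELECT pc.old_id
    FROM mediawiki.pagecontent pc
    WHERE pc.textvector @@ to_tsquery('english', 'example')
    ORDER BY ts_rank(pc.textvector, to_tsquery('english', 'example')) DESC
    LIMIT 100;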

Re: [PERFORM] Query improvement

2011-05-02 Thread Claudio Freire
On Sun, May 1, 2011 at 12:23 PM, Mark wrote: > Now the problem. > When I try ANALYZE it shows: That's a regular explain... can you post an EXPLAIN ANALYZE? Hash joins are very inefficient if they require big temporary files. I usually work around that by disabling hash joins for the problematic
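Disabling hash joins only for the problematic statement can be done with a session-local planner setting; a minimal sketch, with the query itself a placeholder and the join key assumed from the standard MediaWiki schema:

    SET enable_hashjoin = off;   -- discourages hash joins for this session only
    EXPLAIN ANALYZE
    SELECT p.page_title
    FROM mediawiki.page p
    JOIN mediawiki.revision r ON r.rev_page = p.page_id;  -- placeholder query
    RESET enable_hashjoin;       -- restore the default once the statement has run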

[PERFORM] Query improvement

2011-05-02 Thread Mark
Hi, I have 3 tables: page - revision - pagecontent

CREATE TABLE mediawiki.page (
  page_id serial NOT NULL,
  page_namespace smallint NOT NULL,
  page_title text NOT NULL,
  page_restrictions text,
  page_counter bigint NOT NULL DEFAULT 0,
  page_is_redirect smallint NOT NULL DEFAULT 0,
  page_is_n