Re: [PERFORM] Odd behavior with indices

2016-02-29 Thread Tom Lane
Matheus de Oliveira writes:
> On 26 Feb 2016, 4:44 PM, "joe meiring" wrote:
>> The same query for parameters is rather slow and does NOT use the index:
>> 
>> EXPLAIN ANALYZE
>> select *
>> from parameter
>> where exists (
>> select 1 from datavalue
>> where datavalue.parameter_id = parameter.id limit 1
>> );

> Please, could you execute both queries without the LIMIT 1 and show us the
> plans?

> A LIMIT in the inner query acts like a fence and blocks some of the
> optimizations available for EXISTS; you'd better avoid it and see if
> you get a proper semi-join plan then.

FWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get
the same plan with or without it.  But that does act as an optimization
fence in earlier releases.
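
In other words, the form to retest is simply the same query with the
inner LIMIT removed:

EXPLAIN ANALYZE
select *
from parameter
where exists (
    select 1 from datavalue
    where datavalue.parameter_id = parameter.id
);

On 9.5 and later both forms should produce the same plan; on older
releases this is the variant that can be planned as a semi-join.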

regards, tom lane




Re: [PERFORM] Merge joins on index scans

2016-02-29 Thread Tom Lane
David Rowley writes:
> On 27 February 2016 at 11:07, James Parks wrote:
>> If you force the query planner to use a merge join on the above query, it
>> takes 10+ minutes to complete using the data as per below. If you force the
>> query planner to use a hash join on the same data, it takes ~200
>> milliseconds.

> I believe I know what is going on here, but can you please test:
> SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a WHERE b.a_id = a.id
> AND a.nonce = ?) ORDER BY b.id ASC;
> using the merge join plan.

> If this performs much better, then the problem is due to the merge join
> mark/restore causing the join to have to transition through many
> tuples which don't match the a.nonce = ? predicate.
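
One way to coerce the merge join plan for such a test, assuming a psql
session and substituting the actual nonce value for the ? placeholder,
is to switch off the competing join methods via the standard planner
GUCs, roughly:

SET enable_hashjoin = off;   -- rule out the fast hash join plan
SET enable_nestloop = off;   -- and the nested loop, leaving the merge join

EXPLAIN (ANALYZE, BUFFERS)
SELECT b.* FROM b
WHERE EXISTS (SELECT 1 FROM a WHERE b.a_id = a.id AND a.nonce = ?)
ORDER BY b.id ASC;

RESET enable_hashjoin;
RESET enable_nestloop;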

Clearly we are rescanning an awful lot of the "a" table:

 ->  Index Scan using a_pkey on a  (cost=0.00..26163.20 rows=843 width=8) (actual time=5.706..751385.306 rows=83658 loops=1)
   Filter: (nonce = 64)
   Rows Removed by Filter: 2201063696
   Buffers: shared hit=2151024418 read=340
   I/O Timings: read=1.015

The other explain shows a scan of "a" reading about 490k rows and
returning 395 of them, so there's a factor of about 200 re-read here.
I wonder if the planner should have inserted a materialize node to
reduce that.

However, I think the real problem is upstream of that: if that indexscan
was estimated at 26163.20 units, how'd the mergejoin above it get costed
at only 7850.13 units?  The answer has to be that the planner thought the
merge would stop before reading most of "a", as a result of limited range
of b.a_id.  It would be interesting to look into what the actual maximum
b.a_id value is.
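
A quick way to check that, assuming the column names from the quoted
query, would be something like:

-- how far into a's id range do the b.a_id values actually reach?
SELECT max(a_id) FROM b;
SELECT min(id), max(id) FROM a;

If max(a_id) is well below max(id), the planner's expectation that the
merge stops early is at least plausible.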

regards, tom lane

