Merge Join  (cost=0.00..348650.65 rows=901849 width=12)
  Merge Cond: {blah}
  Join Filter: {blah}
  ->  Index Scan using {blah index on blah}  (cost=0.00..289740.65 rows=11259514 width=8)
  ->  Index Scan using {blah index on blah}  (cost=0.00..17229.93 rows=902085 width=8)

This query takes about 3 minutes to run and I'm trying to figure out why. From a tutorial and the docs, I gather that the "..largenum" part of the cost is the estimated number of page reads required, so I understand where 289740 and 17229 come from. But where do the 348650 page reads for the merge join come from? My conjecture is that the joined keys are being stored on disk (if that's how the internals of PostgreSQL work) and have to be re-read for the rest of the query. Is that right? Does that mean I could speed this up by giving the query more RAM to hold them in?
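(One sanity check on my reading of the numbers: subtracting the two child costs from the join's total,

    348650.65 - (289740.65 + 17229.93) = 41680.07

so the join node itself only seems to account for about 41680 extra cost units on top of its inputs.)

In case it matters, here is what I was planning to try, assuming the per-sort memory setting on this version is work_mem (older releases call it sort_mem) and picking 256MB arbitrarily:

    -- raise per-operation memory for this session only, then re-test
    SET work_mem = '256MB';
    EXPLAIN ANALYZE
    SELECT ...;  -- the same query as above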

When I do EXPLAIN ANALYZE, the actual values come out like this:

Merge Join   (actual time=170029.404..170029.404)
  ->  Index Scan  (actual time=27.653..84373.805)
  ->  Index Scan  (actual time=45.681..7026.928)

This seems to confirm that it's the final merge join that takes forever. Is that because it is writing to and reading from disk?
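One experiment I thought of, just as a session-level test rather than a fix, is to see whether the planner does any better when it is steered away from the merge join:

    -- disable merge joins for this session so the planner picks
    -- another strategy (e.g. hash join), then compare the timings
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE
    SELECT ...;  -- same query as above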
