Marcin Gozdalik <goz...@gmail.com> writes:
> Sometimes Postgres will choose a very inefficient plan, which involves
> looping many times over the same rows, producing hundreds of millions
> or billions of rows:

Yeah, this can happen if the outer side of the join has a lot of
duplicate rows.  The query planner is aware of that effect and will
charge an increased cost when it applies, so I wonder whether your
statistics for the tables being joined are up to date.
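As a quick sanity check (a minimal sketch; "orders" and "customers" are
placeholder table names, substitute the tables from your query), you can
see when the planner statistics were last refreshed and re-run ANALYZE
if they look stale:

    -- When were these tables last analyzed (manually or by autovacuum)?
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname IN ('orders', 'customers');

    -- Refresh planner statistics by hand if they look stale:
    ANALYZE orders;
    ANALYZE customers;

After a fresh ANALYZE, re-running EXPLAIN on the query should show
whether the planner's row estimates (and thus its choice of join plan)
change.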

                        regards, tom lane

