>>>
>>>
>>> Right.  I was more wondering why it switched over to a generic plan, as
>>> you've stated, like clockwork starting with the 6th execution run.
>>>
>>>
>> That's a hard-coded value. The first 5 executions are re-planned using
>> the actual parameter values, and then we try generating a generic plan and
>> see if it's cheaper than the non-generic one. You can disable that, though.
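>>
>> A minimal sketch of that behavior (hypothetical table and statement
>> names; the plan_cache_mode setting assumes PostgreSQL 12 or later):
>>
>>   PREPARE q (int) AS SELECT * FROM t WHERE col = $1;
>>
>>   -- Executions 1 through 5 each build a custom plan using the
>>   -- actual parameter value:
>>   EXECUTE q(42);
>>
>>   -- From the 6th execution on, a generic plan is tried and kept if
>>   -- its estimated cost is cheaper than the average estimated cost
>>   -- of the custom plans:
>>   EXECUTE q(42);
>>
>>   -- To disable the switch and always re-plan with actual values:
>>   SET plan_cache_mode = force_custom_plan;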
>
>
>
> So on that note, starting with the 6th execution, the planner evidently
> still thinks that the generic plan will perform better than the
> non-generic one, which is why it keeps using the generic plan from that
> point forward?
>
> Similar to the parallel scans, is there any chance of the planner being
> enhanced in the future to come to a better conclusion as to whether or
> not the generic plan will perform better than the non-generic plan?  :-)
>

It is all based on estimations, and when the estimations are not correct,
then ... The current solution is far from perfect, but nobody has come up
with a better idea :( Statistics-based planners are the best available
technology, unfortunately with a lot of gaps.

There are no statistics on where any given tuple is in the database, so a
precise estimation of EXISTS is hard (impossible). There is a similar
issue with LIMIT. It would be nice, but I don't expect any significant
changes in this area - maybe some step-by-step tuning of some parameters.
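
As a hypothetical illustration of the LIMIT side of this (made-up table
and data, just to show the assumption):

  CREATE TABLE t (a int, b int);
  INSERT INTO t SELECT i, i % 100 FROM generate_series(1, 1000000) i;
  ANALYZE t;

  -- The planner prices LIMIT as a fraction of the underlying scan,
  -- assuming the qualifying rows are spread uniformly through the
  -- table. If the matching rows are in fact clustered at the end (or
  -- absent), the scan reads far more of the table than estimated.
  EXPLAIN SELECT * FROM t WHERE b = 0 LIMIT 10;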

Regards

Pavel


>
>
>
>>
>>
>> regards
>>
>> --
>> Tomas Vondra                  http://www.2ndQuadrant.com
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
>
> Thanks for all the help!  I really appreciate it!
>
> Ryan
>
>
