On Mon, Feb 15, 2010 at 7:51 PM, Jeroen Vermeulen <j...@xs4all.nl> wrote:
> AFAIC a statement could go to "re-planning mode" if the shortest execution
> time for the generic plan takes at least 10x longer than the longest
> planning time. That gives us a decent shot at finding statements where
> re-planning is a safe bet. A parameter that we or the user would have to
> tweak would just be a fragile approximation of that.
So in principle I agree with this idea. I think a conservative value for
the constant would be more like 100x though. If I told you we had an easy
way to speed all your queries up by 10% by caching queries but were just
choosing not to, then I think you would be unhappy. Whereas if I told you
we were spending 1% of the run time planning queries, I think most people
would not be concerned.

There's a second problem though. We don't actually know how long any given
query is going to take to plan or execute. We could just remember how long
it took to plan and execute last time, or how long it took to plan last
time and the average execution time since we cached that plan. Perhaps we
should track the stddev of the execution times, or the max execution time
of the plan? I.e., there are still unanswered questions about the precise
heuristic to use, but I bet we can come up with something reasonable.
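To make the shape of that concrete, here is a rough sketch of the kind of
bookkeeping I have in mind. The struct, field names, and function are all
invented for illustration and aren't meant to resemble the actual plancache
code; it just applies the min-execution vs. max-planning comparison from
upthread with the 100x constant:

/*
 * Rough sketch only -- names and structure invented for illustration,
 * not actual backend code.
 */
#include <stdbool.h>

typedef struct CachedPlanStats
{
    double      max_plan_time;  /* longest time spent planning (seconds) */
    double      min_exec_time;  /* shortest execution of the generic plan */
    long        exec_count;     /* executions since the plan was cached */
} CachedPlanStats;

/*
 * Should this statement go back to "re-planning mode"?  Re-plan only
 * when even the best-case execution dwarfs the worst-case planning cost,
 * so planning overhead stays around 1% of run time (hence 100x).
 * Tracking the average, stddev, or max execution time instead is one of
 * the open questions above.
 */
static bool
should_replan(const CachedPlanStats *stats)
{
    if (stats->exec_count == 0)
        return false;           /* no measurements yet; keep cached plan */

    return stats->min_exec_time >= 100.0 * stats->max_plan_time;
}

--
greg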