On Thu, Feb 11, 2010 at 7:39 AM, Bart Samwel <b...@samwel.tk> wrote:
> On Thu, Feb 11, 2010 at 13:25, Pavel Stehule <pavel.steh...@gmail.com> wrote:
>> 2010/2/11 Bart Samwel <b...@samwel.tk>:
>>> Perhaps this could be based on a (configurable?) ratio of observed
>>> planning time and projected execution time. I mean, if planning it the
>>> first time took 30 ms and projected execution time is 1 ms, then by all
>>> means NEVER re-plan. But if planning the first time took 1 ms and
>>> resulted in a projected execution time of 50 ms, then it's relatively
>>> cheap to re-plan every time (cost increase per execution is 1/50 = 2%),
>>> and the potential gains are much greater (taking a chunk out of 50 ms
>>> adds up quickly).
>>
>> It could be a good idea. I don't believe in sophisticated methods; there
>> could be a very simple solution: a cost "limit". More expensive queries
>> could be re-planned every time their cost goes over the limit.
>
> I guess the required complexity depends on how variable planning costs
> are. If planning is typically <= 2 ms, then a hard limit on estimated
> cost is useful and can be set as low as (the equivalent of) 15 ms.
> However, if planning costs can be 50 ms, then the lowest reasonable
> "fixed" limit is quite a bit larger than that -- and that does not solve
> the problem reported earlier in this thread, where a query takes 30 ms
> using a generic plan and 1 ms using a specialized plan.
>
> Anyhow, I have no clue how much time the planner takes. Can anybody
> provide any statistics in that regard?
It depends a great deal on the query, which is one of the things that
makes implementing this rather challenging.

...Robert
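For concreteness, the two heuristics discussed above (Bart's ratio test
and Pavel's fixed cost limit) could look roughly like the sketch below.
This is illustrative only: CachedPlanStats, the field names, and both
parameters are made up for the example and do not exist anywhere in the
PostgreSQL source tree.

#include <stdbool.h>

/*
 * Hypothetical bookkeeping for a cached plan -- not a real PostgreSQL
 * structure, just enough to make the arithmetic concrete.
 */
typedef struct CachedPlanStats
{
	double		planning_ms;		/* observed time spent planning */
	double		projected_exec_ms;	/* planner's runtime estimate */
} CachedPlanStats;

/*
 * Bart's ratio-based rule: re-plan only when planning is cheap relative
 * to the projected execution time.  With max_replan_ratio = 0.02, the
 * 1 ms plan / 50 ms run case (ratio 0.02) re-plans every time, while
 * the 30 ms plan / 1 ms run case (ratio 30.0) never does, matching the
 * examples in the thread.
 */
static bool
should_replan_ratio(const CachedPlanStats *stats, double max_replan_ratio)
{
	if (stats->projected_exec_ms <= 0.0)
		return false;			/* nothing worth re-optimizing */
	return stats->planning_ms / stats->projected_exec_ms <= max_replan_ratio;
}

/*
 * Pavel's simpler rule: re-plan whenever the estimated cost exceeds a
 * fixed limit, regardless of how long planning itself takes.
 */
static bool
should_replan_limit(const CachedPlanStats *stats, double replan_cost_limit_ms)
{
	return stats->projected_exec_ms > replan_cost_limit_ms;
}

As Robert notes, the difficulty is that planning_ms varies enormously
from query to query, so no single fixed limit obviously works across
the board; the ratio variant sidesteps that somewhat by measuring
planning cost per statement rather than assuming one.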