On Wed, Jan 30, 2008 at 09:13:37PM -0500, Christopher Browne wrote:
> There seems to be *plenty* of evidence out there that the performance
> penalty would NOT be "essentially zero."
>
> Tom points out:
> eqjoinsel(), for one, is O(N^2) in the number of MCV values kept.
>
> It seems to me that there are cases where we can *REDUCE* the
> histogram width, and if we do that, and then pick and choose the
> columns where the width increases, the performance penalty may be
> "yea, verily *actually* 0."
>
> This fits somewhat with Simon Riggs' discussion earlier in the month
> about Segment Exclusion; these both represent cases where it is quite
> likely that there is emergent data in our tables that can help us to
> better optimize our queries.
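(Tom's O(N^2) point is easy to see with a toy sketch. The code below is Python for brevity and is *not* the actual eqjoinsel() code from selfuncs.c; it only mimics the shape of the work: matching the two columns' MCV lists pairwise costs n1 * n2 equality tests, so raising the stats target on both sides grows that part of the estimate quadratically.)

```python
def mcv_join_selectivity(mcv1, mcv2):
    """Toy stand-in for eqjoinsel()'s MCV-vs-MCV matching.

    mcv1, mcv2: lists of (value, frequency) pairs, one list per join column.
    Returns (selectivity contribution from matched MCVs, comparisons done).
    """
    sel = 0.0
    comparisons = 0
    # Every MCV on one side is compared with every MCV on the other:
    # len(mcv1) * len(mcv2) equality tests, i.e. O(N^2) in the target.
    for v1, f1 in mcv1:
        for v2, f2 in mcv2:
            comparisons += 1
            if v1 == v2:
                sel += f1 * f2
    return sel, comparisons

# With 10 MCVs per side we do 100 comparisons; with 100 per side, 10,000.
mcv1 = [(1, 0.5), (2, 0.3)]
mcv2 = [(2, 0.4), (3, 0.2)]
sel, comparisons = mcv_join_selectivity(mcv1, mcv2)
```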
This is all still hand-waving until someone actually measures the impact
of the stats target on planner time. I would suggest measuring that
before trying to invent more machinery; besides, I think you'll need
that data for the machinery to make an intelligent decision anyway.

BTW, with autovacuum I don't really see why we should care how long
analyze takes, though perhaps it should have a throttle a la
vacuum_cost_delay.
-- 
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828