mlw <[EMAIL PROTECTED]> writes:
> For instance: say we have two similarly performing plans, close to one
> another, say within 20%, one plan uses an index, and one does not. It is
> unlikely that the index plan will perform substantially worse than the
> non-index plan, right?
This seems to be the crux of the argument ... but I've really seen no
evidence to suggest that it's true.  The downside of improperly picking
an indexscan plan is *not* any less than the downside of improperly
picking a seqscan plan, in my experience.

It does seem (per Thomas' earlier observation) that we get more
complaints about failure to use an index scan than about the other case.
Prior to 7.2 that was usually pretty obviously traceable to
overestimates of the number of rows to be retrieved (due to inadequate
data statistics).  In 7.2 that doesn't seem to be the bottleneck
anymore.  I think now that there may be some shortcoming in the
planner's cost model, or in the adjustable parameters for same.  But my
reaction to that is to try to figure out how to fix the cost model.  I
certainly do not feel that we've reached a dead end in which the only
answer is to give up and stop trusting the cost-based optimization
approach.

> Now, given the choice of the two strategies on a table, both pretty
> close to one another, the risk of poor performance for using the index
> scan is minimal based on the statistics, but the risk of poor
> performance for using the sequential scan is quite high on a large
> table.

You keep asserting that, and you keep providing no evidence.

			regards, tom lane
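PS: for anyone wanting to check which kind of failure they're seeing on
their own data, here's a minimal sketch.  "bigtable" and "col" are
hypothetical stand-ins for any large indexed table; EXPLAIN ANALYZE (new
in 7.2) prints the planner's estimated row count next to the actual one:

	-- Compare the planner's row estimate against reality:
	EXPLAIN ANALYZE SELECT * FROM bigtable WHERE col < 42;

	-- If the estimate is far off, it's a statistics problem;
	-- raising the per-column statistics target and re-analyzing
	-- may help:
	ALTER TABLE bigtable ALTER COLUMN col SET STATISTICS 100;
	ANALYZE bigtable;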
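If the row estimates look sane but the plan choice is still wrong, then
the cost model itself is suspect.  One way to probe that (same
hypothetical table) is to force each plan in turn and compare the
estimated cost ratio against the actual time ratio:

	SET enable_indexscan = off;
	EXPLAIN ANALYZE SELECT * FROM bigtable WHERE col < 42;  -- seqscan
	RESET enable_indexscan;

	SET enable_seqscan = off;
	EXPLAIN ANALYZE SELECT * FROM bigtable WHERE col < 42;  -- indexscan
	RESET enable_seqscan;

	-- If the cost ratio disagrees badly with the time ratio, the
	-- adjustable parameters may not match your hardware; e.g., when
	-- the table is mostly cached, a lower random_page_cost (default
	-- 4) may be more honest:
	SET random_page_cost = 2;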