> One objection to this is that after moving "off the gold standard" of
> 1.0 = one page fetch, there is no longer any clear meaning to the
> cost estimate units; you're faced with the fact that they're just an
> arbitrary scale. I'm not sure that's such a bad thing, though. For
> instance, some people might want to try to tune their settings so that
> the estimates are actually comparable to milliseconds of real time.
Any chance that the correspondence to time could be made a deliberate part of the design, with people generally advised to follow that rule? If we could tell people to run *benchmark* and use those numbers directly as a first-approximation tuning, it could help quite a bit for people new to PostgreSQL who are experiencing poor performance. effective_cache_size would then become essentially the last hand-set variable for medium-sized installations.
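To sketch what "use the benchmark numbers directly" might look like: the helper below is purely hypothetical (no such tool exists), but it shows the arithmetic. If the cost units are defined to be milliseconds, measured per-operation timings map straight onto the planner cost settings; under the traditional convention that one sequential page fetch costs 1.0, everything is instead divided through by the sequential-page timing.

```python
# Hypothetical helper: turn raw benchmark timings (milliseconds per
# primitive operation) into planner cost settings.  All timing numbers
# in the example are invented for illustration.
def costs_from_benchmark(seq_page_ms, random_page_ms,
                         cpu_tuple_ms, cpu_operator_ms,
                         ms_units=True):
    # ms_units=True: cost unit is one millisecond, so timings pass
    # through unchanged.  ms_units=False: normalize so that one
    # sequential page fetch costs exactly 1.0, as the planner
    # traditionally assumes.
    scale = 1.0 if ms_units else 1.0 / seq_page_ms
    return {
        "seq_page_cost": seq_page_ms * scale,
        "random_page_cost": random_page_ms * scale,
        "cpu_tuple_cost": cpu_tuple_ms * scale,
        "cpu_operator_cost": cpu_operator_ms * scale,
    }

# Example: a disk measured at 2 ms per sequential page and 8 ms per
# random page, with much cheaper per-tuple and per-operator CPU costs.
print(costs_from_benchmark(2.0, 8.0, 0.02, 0.005, ms_units=False))
```

Either way the ratios between the settings come straight from the benchmark; the only question is which scale the units are pinned to.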