On Fri, Aug 12, 2011 at 5:39 PM, Kevin Grittner
<kevin.gritt...@wicourts.gov> wrote:
> Robert Haas <robertmh...@gmail.com> wrote:
>
>> That's one of the points I asked for feedback on in my original
>> email.  "How should the costing be done?"
>
> It seems pretty clear that there should be some cost adjustment.  If
> you can't get good numbers somehow on what fraction of the heap
> accesses will be needed, I would suggest using a magic number based
> on the assumption that half the heap accesses otherwise necessary
> will be done.  It wouldn't be the worst magic number in the planner.
> Of course, real numbers are always better if you can get them.
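The cost adjustment Kevin suggests could be sketched like this: scale the heap-access component of the index scan's cost by the fraction of fetches the visibility map is expected to avoid, falling back to a magic 0.5 when no statistics are available. This is purely an illustrative model, not PostgreSQL's actual costing code; the function and parameter names are made up for the example.

```python
# Illustrative sketch only -- not PostgreSQL's costsize.c. Assumes the
# heap-access portion of an index scan's cost has already been computed.

def index_only_heap_cost(plain_heap_cost, all_visible_frac=None):
    """Discount the heap-fetch cost of an index-only scan.

    all_visible_frac: estimated fraction of heap pages whose visibility
    map bits are set, if statistics are available; otherwise fall back
    to the proposed magic number of 0.5 (i.e. assume half the heap
    accesses otherwise necessary will still be done).
    """
    if all_visible_frac is None:
        all_visible_frac = 0.5  # the magic number proposed above
    return plain_heap_cost * (1.0 - all_visible_frac)

print(index_only_heap_cost(1000.0))        # magic number: half the cost
print(index_only_heap_cost(1000.0, 0.9))   # with real stats: much cheaper
```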
It wouldn't be that difficult (I think) to make VACUUM and/or ANALYZE
gather some statistics; what I'm worried about is that we'd have
correlation problems.  Consider a wide table with an index on (id,
name), and the query:

SELECT name FROM tab WHERE id = 12345

Now, suppose that we know that 50% of the heap pages have their
visibility map bits set.  What's the chance that this query won't need
a heap fetch?  Well, the chance is 50% *if* you assume that a row
which has been quiescent for a long time is just as likely to be
queried as one that has been recently inserted or updated.  However,
in many real-world use cases, nothing could be farther from the truth.
What do we do about that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
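The correlation worry can be made concrete with a toy simulation: 50% of pages are all-visible, but if queries preferentially hit recently modified rows (which necessarily sit on not-all-visible pages), the observed heap-fetch rate is much higher than the page-level statistic predicts. All numbers and names here are invented for illustration.

```python
import random

random.seed(42)

# Toy model: 1000 heap pages, of which pages 500..999 are all-visible
# (visibility map bit set) and pages 0..499 were recently modified.
n_pages = 1000
all_visible = [page >= 500 for page in range(n_pages)]

def sample_page(skew):
    """Pick the page a query lands on. With probability `skew`, target a
    recently modified (not-all-visible) page; otherwise pick uniformly."""
    if random.random() < skew:
        return random.randrange(500)       # hot, recently modified pages
    return random.randrange(n_pages)       # uniform over the whole table

for skew in (0.0, 0.8):
    trials = 100_000
    fetches = sum(not all_visible[sample_page(skew)] for _ in range(trials))
    print(f"skew={skew}: heap-fetch rate = {fetches / trials:.2f}")
```

With no skew the heap-fetch rate matches the naive 50% estimate; with 80% of queries aimed at recently modified rows it climbs to roughly 90%, which is the gap a page-fraction statistic alone can't capture.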