On Mon, Mar 14, 2011 at 8:33 PM, Robert Haas <robertmh...@gmail.com> wrote:
> I'm not sure about that either, although I'm not sure of the reverse
> either. But before I invest any time in it, do you have any other
> good ideas for addressing the "it stinks to scan the entire index
> every time we vacuum" problem? Or for generally making vacuum
> cheaper?
You could imagine an index AM that, instead of scanning the index, just
accumulated all the dead tuples in a hash table and checked that table
before following any index link. Whenever the hash table got too big, it
could do one sequential scan, prune every pointer to those tuples, and
start a new hash table.

That would work well when frequent vacuums each find only a few dead
tuples. It might even let us absorb dead tuples from "retail" vacuums, so
we could get rid of line pointers earlier. But it would involve more
WAL-logged operations and incur extra overhead on every index lookup.

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers