On Wed, Feb 9, 2022 at 1:21 AM Peter Geoghegan <p...@bowt.ie> wrote:
>
> The btree side of this shouldn't care at all about dead tuples (in
> general we focus way too much on dead tuples, and way too little on
> pages). With bottom-up index deletion the number of dead tuples in the
> index is just about completely irrelevant. It's entirely possible and
> often even likely that 20%+ of all index tuples will be dead at any
> one time, when the optimization perfectly preserves the index
> structure.
>
> The btree side of the index AM API should be focussing on the growth
> in index size, relative to some expectation (like maybe the growth for
> whatever index on the same table has grown the least since last time,
> accounting for obvious special cases like partial indexes). Perhaps
> we'd give some consideration to bulk deletes, too. Overall, it should
> be pretty simple, and should sometimes force us to do one of these
> "dynamic mini vacuums" of the index just because we're not quite sure
> what to do. There is nothing wrong with admitting the uncertainty.

I agree that we should focus more on index size growth than on dead
tuples, but I don't think we can completely ignore the number of dead
tuples. Although we have bottom-up index deletion, whether the index
structure is preserved depends on which keys are inserted next. For
example, if 80% of the index tuples are dead but the index size is
still fine, can we safely skip the vacuum? If we skip it, it is quite
possible that in some cases we create huge bloat, e.g. if the keys we
insert next cannot take advantage of bottom-up deletion. So IMHO the
decision should be based on a combination of index size bloat and the
percentage of dead tuples. Maybe we can give more weight to the size
bloat and less weight to the dead-tuple percentage, but we should not
ignore it completely.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
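To illustrate the weighted-combination idea above, here is a minimal sketch of
such a vacuum-trigger score. This is not PostgreSQL code; the function name,
the weights, and the threshold are all invented for illustration, and a real
heuristic would need tuning against actual autovacuum behavior.

```python
# Hypothetical sketch: combine relative index-size growth and the
# dead-tuple fraction into one vacuum-trigger score, weighting size
# bloat more heavily than dead tuples. All names/values are assumptions.

def should_vacuum_index(size_growth_ratio, dead_tuple_fraction,
                        size_weight=0.7, dead_weight=0.3, threshold=0.5):
    """Return True when the weighted score crosses the threshold.

    size_growth_ratio: index growth since the last vacuum, relative to
        some expectation (0.0 = grew as expected, 1.0 = doubled, ...).
    dead_tuple_fraction: fraction of index tuples believed dead (0.0-1.0).
    """
    score = (size_weight * size_growth_ratio
             + dead_weight * dead_tuple_fraction)
    return score >= threshold

# Index barely grew but is 80% dead tuples (bottom-up deletion is
# keeping the structure intact): 0.7*0.1 + 0.3*0.8 = 0.31 -> no vacuum yet.
print(should_vacuum_index(0.1, 0.8))   # False

# Same dead fraction, but the index bloated 60% beyond expectation:
# 0.7*0.6 + 0.3*0.8 = 0.66 -> vacuum triggered.
print(should_vacuum_index(0.6, 0.8))   # True
```

Note that with these weights the dead-tuple term alone can never trigger a
vacuum (0.3 * 1.0 < 0.5), which matches the "more weight to size bloat"
suggestion; whether that is desirable is exactly the judgment call being
discussed.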