On Fri, Feb 4, 2022 at 1:46 PM Peter Geoghegan <p...@bowt.ie> wrote:
> That should work. All you need is a table with several indexes, and a
> workload consisting of updates that modify a column that is the key
> column for only one of the indexes. I would expect bottom-up index
> deletion to be 100% effective for the not-logically-modified indexes,
> in the sense that there will be no page splits -- provided there are
> no long held snapshots, and provided that the index isn't very small.
> If it is small (think of something like the pgbench_branches pkey),
> then even the occasional ANALYZE will act as a "long held snapshot"
> relative to the size of the index. And so then you might get one page
> split per original leaf page, but probably not a second, and very very
> probably not a third.
>
> The constantly modified index will be entirely dependent on index
> vacuuming here, and so an improved VACUUM design that allows that
> particular index to be vacuumed more frequently could really improve
> performance.
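For my own notes, the kind of setup described above might look roughly like this (table, column, and index names are purely illustrative, not from any actual test):

```sql
-- Hypothetical test setup: one index whose key column the workload
-- constantly updates, plus indexes on columns the updates never touch.
-- The indexes are deliberately non-unique, since _bt_check_unique()
-- would otherwise set LP_DEAD bits on its own.
CREATE TABLE t (
    id      int,
    a       int,
    b       int,
    hot_col int   -- the one column the workload modifies
);
CREATE INDEX t_id_idx  ON t (id);
CREATE INDEX t_a_idx   ON t (a);
CREATE INDEX t_b_idx   ON t (b);
CREATE INDEX t_hot_idx ON t (hot_col);

-- Workload: only hot_col changes, so t_hot_idx takes new entries on
-- every update, while the other indexes are never logically modified
-- and should be kept small by bottom-up index deletion.
UPDATE t SET hot_col = hot_col + 1 WHERE id = 42;
```

If I understand the tip about kill_prior_tuple correctly, reads during the test could be steered toward bitmap scans with something like SET enable_indexscan = off, so index scans don't set LP_DEAD bits along the way.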
Thanks for checking my work here - I wasn't 100% sure I had the right idea.

> BTW, it's a good idea to avoid unique indexes in test cases where
> there is an index that you don't want to set LP_DEAD bits for, since
> _bt_check_unique() tends to do a good job of setting LP_DEAD bits,
> independent of the kill_prior_tuple thing. You can avoid using
> kill_prior_tuple by forcing bitmap scans, of course.

Thanks for this tip, too.

-- 
Robert Haas
EDB: http://www.enterprisedb.com