On Mon, Mar 22, 2021 at 7:27 AM Peter Geoghegan <p...@bowt.ie> wrote:
>
> On Wed, Mar 10, 2021 at 5:34 PM Peter Geoghegan <p...@bowt.ie> wrote:
> > Here is another bitrot-fix-only revision, v9. Just the recycling patch
> > again.
>
> I committed the final nbtree page deletion patch just now -- the one
> that attempts to make recycling happen for newly deleted pages. Thanks
> for all your work on patch review, Masahiko!
You're welcome! Those are really good improvements.

With this patch series, btree indexes have become like hash indexes in
terms of amvacuumcleanup: btvacuumcleanup() now does a full index scan
in only two cases, a metapage upgrade, or more than 5% of the index's
pages being deleted but not yet recycled. Both cases seem rare. So do
we want to disable parallel index cleanup for btree indexes, as hash
indexes do? That is, remove VACUUM_OPTION_PARALLEL_COND_CLEANUP from
amparallelvacuumoptions.

IMO we can live with the current configuration, just in case a user
runs into one of those rare situations (especially the latter one). In
most cases the parallel vacuum workers launched for index cleanup
would exit as a no-op, but the side effects (wasted resources, launch
overhead, etc.) should not be large. If we wanted to enable parallel
cleanup only in the particular cases where it helps, we would need
some other way for the index AM to tell lazy vacuum whether or not a
parallel worker should process the index at that time.

What do you think? I'm not sure we need any changes, but I think it's
worth discussing here.
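For reference, a minimal sketch of what disabling it would look like
(untested; assuming the current flag assignment in bthandler() in
src/backend/access/nbtree/nbtree.c):

    /* In bthandler(), src/backend/access/nbtree/nbtree.c */

    /*
     * Current: cleanup may run in a parallel worker, but only when
     * bulk-deletion was not already performed by this VACUUM.
     */
    amroutine->amparallelvacuumoptions =
        VACUUM_OPTION_PARALLEL_BULKDEL | VACUUM_OPTION_PARALLEL_COND_CLEANUP;

    /*
     * Proposed: match hashhandler() and never hand index cleanup to a
     * parallel worker; only bulk-deletion can be parallelized.
     */
    amroutine->amparallelvacuumoptions =
        VACUUM_OPTION_PARALLEL_BULKDEL;

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/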