On Fri, Apr 26, 2024 at 4:43 AM Michael Banck <mba...@gmx.net> wrote:
> > I believe that the defaults should work well in moderately sized databases
> > with moderate usage characteristics. If you have large tables or a high
> > number of transactions per second, you can be expected to make the effort
> > and adjust the settings for your case. Adding more GUCs makes life *harder*
> > for the users who are trying to understand and configure how autovacuum
> > works.
>
> Well, I disagree to some degree. I agree that the defaults should work
> well in moderately sized databases with moderate usage characteristics.
> But I also think we can do better than telling DBAs that they have to
> manually fine-tune autovacuum for large tables (and frequently
> implementing by hand what this patch proposes, namely setting
> autovacuum_vacuum_scale_factor to 0 and autovacuum_vacuum_threshold to a
> high number), as this is cumbersome and needs adult supervision that is
> not always available. Of course, it would be great if we could just slap
> some AI into the autovacuum launcher that figures things out
> automagically, but I don't think we are there yet.
>
> So this proposal (probably along with a higher default threshold than
> 500000, but IMO less than what Robert and Nathan suggested) sounds like
> a step forward to me. DBAs can set the threshold lower if they want, or
> maybe we can just turn it off by default if we cannot agree on a sane
> default, but I think this (using the simplified formula from Nathan) is
> a good approach that takes some pain away from autovacuum tuning and
> reserves that for the really difficult cases.
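(As a concrete sketch of the workaround Michael mentions: the per-table
incantation usually looks something like the following, with the table
name and the exact threshold value purely illustrative:

    ALTER TABLE big_history_table SET (
        autovacuum_vacuum_scale_factor = 0,
        autovacuum_vacuum_threshold = 1000000
    );

And assuming the simplified formula from Nathan is the obvious
min()-style cap on the usual trigger computation, the effective trigger
point would come out roughly like this, shown with the stock defaults
(threshold 50, scale factor 0.2), the 500000 cap discussed above, and a
hypothetical billion-row table:

    -- dead tuples needed before autovacuum fires on a 1e9-row table
    SELECT least(50 + 0.2 * 1000000000, 500000) AS dead_tuples_to_trigger;

That is, autovacuum would fire after ~500000 dead tuples rather than
~200 million.)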
I agree with this. If having an extra setting substantially reduces the
number of cases that require manual tuning, it's totally worth it. And I
think it will. To be clear, I don't think this is the biggest problem
with the autovacuum algorithm; far from it. But it's a relatively easy
one to fix.

--
Robert Haas
EDB: http://www.enterprisedb.com