On Wed, Jan 25, 2023 at 4:43 PM Andres Freund <and...@anarazel.de> wrote:
> I unfortunately haven't been able to keep up with the thread and saw this just
> now. But I've expressed the concern below several times before, so it
> shouldn't come as a surprise.
You missed the announcement 9 days ago, and the similar clear
signalling of a commit from yesterday. I guess I'll need to start
personally reaching out to you any time I commit anything in this area
in the future. I almost considered doing that here, in fact.

> The most common problematic scenario I see are tables full of rows with
> limited lifetime. E.g. because rows get aggregated up after a while. Before
> those rows practically never got frozen - but now we'll freeze them all the
> time.

Fundamentally, the choice to freeze or not freeze is driven by
speculation about the needs of the table, with some guidance from the
user. That isn't new.

It seems to me that it will always be possible for you to come up with
an adversarial case that makes any given approach look bad, no matter
how good it is. Of course that doesn't mean that this particular
complaint has no validity; but it does mean that you need to be
willing to draw the line somewhere. In particular, it would be very
useful to know what the parameters of the discussion are. Obviously I
cannot come up with an algorithm that can literally predict the
future. But I may be able to handle specific cases of concern better,
or to better help users cope in whatever way.

> I whipped up a quick test: 15 pgbench threads insert rows, 1 psql \while loop
> deletes older rows.

Can you post the script? And what setting did you use?

> Workload fits in s_b:
>
> Autovacuum on average generates between 1.5x-7x as much WAL as before,
> depending on how things interact with checkpoints. And not just that, each
> autovac cycle also takes substantially longer than before - the average time
> for an autovacuum roughly doubled. Which of course increases the amount of
> bloat.

Anything that causes an autovacuum to take longer will effectively
make autovacuum think that it has removed more bloat than it really
has, which will then make autovacuum less aggressive when it really
should be more aggressive.
That's a preexisting issue, one that needs to be accounted for in the
context of this discussion.

> This is significantly worse than I predicted. This was my first attempt at
> coming up with a problematic workload. There'll likely be way worse in
> production.

As I said in the commit message, the current default for
vacuum_freeze_strategy_threshold is considered low, and was always
intended to be provisional. It is something that I explicitly noted
would be reviewed after the beta period is over, once we have gained
more experience with the setting.

I think that a far higher setting could be almost as effective. 32GB,
or even 64GB, could work quite well, since you'll still have the FPI
optimization.

--
Peter Geoghegan
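P.S. For concreteness, raising the provisional default as discussed
above would just be a one-line configuration change. This is a sketch
only: the value shown is the one floated in this message, not a tuned
recommendation, and the GUC is the one added by the commit under
discussion:

```
# postgresql.conf fragment (hypothetical): raise the table-size
# threshold above which VACUUM switches to the eager freezing strategy.
# 32GB is the candidate value mentioned in this thread, not a default.
vacuum_freeze_strategy_threshold = 32GB
```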