On 26/04/2024 at 04:24, Laurenz Albe wrote:
> On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:
> > I believe that the underlying problem here can be summarized in this
> > way: just because I'm OK with 2MB of bloat in my 10MB table doesn't
> > mean that I'm OK with 2TB of bloat in my 10TB table. One reason for
> > this is simply that I can afford to waste 2MB much more easily than I
> > can afford to waste 2TB -- and that applies both on disk and in
> > memory.
>
> I don't find that convincing.  Why are 2TB of wasted space in a 10TB
> table worse than 2TB of wasted space in 100 tables of 100GB each?


Good point, but another way of summarizing the problem is that the autovacuum_*_scale_factor parameters work well as long as the access pattern is more or less evenly distributed across the table.
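
For reference, the documented trigger condition behind these parameters is: a vacuum is launched once the number of dead tuples exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. With the default scale factor of 0.2, a 10-billion-row table can accumulate roughly 2 billion dead tuples before autovacuum even starts. A minimal sketch to see where that threshold sits for a given table ('mybigtable' is just a placeholder name):

    -- Compute the dead-tuple count at which autovacuum would trigger,
    -- using the current settings and the planner's row estimate.
    SELECT reltuples,
           current_setting('autovacuum_vacuum_threshold')::bigint
           + current_setting('autovacuum_vacuum_scale_factor')::float8 * reltuples
             AS vacuum_trigger_dead_tuples
    FROM pg_class
    WHERE relname = 'mybigtable';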

Suppose my very large table gets updated only on its most recent 1% of rows. We probably want to decrease autovacuum_analyze_scale_factor and autovacuum_vacuum_scale_factor for this one.
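
Today that tuning has to be done per table, via storage parameters; a minimal sketch (table name and values are only for illustration):

    -- Make autovacuum/autoanalyze much more aggressive for this one table.
    ALTER TABLE mybigtable SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_analyze_scale_factor = 0.005
    );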

Partitioning would be a good solution, but IMHO postgres should be able to handle this case anyway, ideally without per-table configuration.
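
For completeness, the partitioning workaround might look like this (table and column names are made up), with only the "hot" partition getting the aggressive settings:

    -- Range-partition on the timestamp so recent rows live in their own partition.
    CREATE TABLE events (
        id bigint NOT NULL,
        created_at timestamptz NOT NULL,
        payload text
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2024_04 PARTITION OF events
        FOR VALUES FROM ('2024-04-01') TO ('2024-05-01');

    -- Only the hot partition needs a lower scale factor.
    ALTER TABLE events_2024_04 SET (autovacuum_vacuum_scale_factor = 0.01);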

