On Thu, May 27, 2021 at 1:29 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> Yeah. My belief here is that users might bother to change
> default_toast_compression, or that we might do it for them in a few
> years, but the gains from doing so are going to be only incremental.
> That being the case, most DBAs will be content to allow the older
> compression method to age out of their databases through routine row
> updates. The idea that somebody is going to be excited enough about
> this to run a downtime-inducing VACUUM FULL doesn't really pass the
> smell test.

That was my original understanding of your position, FWIW. I agree
with all of this.

> That doesn't make LZ4 compression useless, by any means, but it does
> suggest that we shouldn't be adding overhead to VACUUM FULL for the
> purpose of easing instantaneous switchovers.

Right. More generally, there often seems to be value in
under-specifying what a compression option does, or in treating such
options as advisory. You mentioned the history of SET STORAGE, which
seems very relevant.
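To make the "advisory" idea concrete, here is a minimal sketch of the
behavior on PostgreSQL 14 (assuming a server built with LZ4 support;
the table and column names are purely illustrative). The setting only
governs newly-stored values, so the older method lingers until
routine updates rewrite the rows:

CREATE TABLE t (payload text);
INSERT INTO t VALUES (repeat('x', 10000));
-- Stored with pglz, assuming the stock default_toast_compression.

ALTER TABLE t ALTER COLUMN payload SET COMPRESSION lz4;
INSERT INTO t VALUES (repeat('y', 10000));
-- Stored with lz4; the existing pglz value is left alone.

-- Both methods now coexist in the same column:
SELECT pg_column_compression(payload) FROM t;   -- pglz, lz4

-- SET STORAGE is advisory in the same way: it only affects how
-- future values are toasted.
ALTER TABLE t ALTER COLUMN payload SET STORAGE EXTERNAL;

Nothing here rewrites values that are already stored, which is
exactly why the older method can only age out through routine row
churn.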
I am reminded of the example of B-Tree deduplication with unique
indexes, where we selectively apply the optimization based on
page-level details. Deduplication isn't usually useful in unique
indexes (for the obvious reason), though occasionally it is extremely
useful. I think that there might be a variety of things that work a
little like that. It can help with avoiding unnecessary dump and
reload hazards, too.

I am interested in hearing the *principle* behind Robert's position.
This whole area seems like something that might have at least a
couple of different schools of thought. If so, then I'd sincerely
like to hear the other side of the argument.

--
Peter Geoghegan