On Tue, Feb 15, 2022 at 10:20 AM Robert Haas <robertmh...@gmail.com> wrote:
> In general, deciding on new compression algorithms can feel a bit like
> debating the merits of vi vs. emacs, or one political party vs.
> another.
Really? I don't get that impression myself. (That's not important,
though.)

> What I imagine if this patch is accepted is that we (or our users)
> will end up using lz4 for places where compression needs to be very
> lightweight, and zstd for places where it's acceptable or even
> desirable to spend more CPU cycles in exchange for better compression.

Sounds reasonable to me.

> Likewise, I still download the .tar.gz version of anything
> that gives me that option, basically because I'm familiar with the
> format and it's easy for me to just carry on using it -- and in a
> similar way I expect a lot of people will be happy to continue to
> compress backups with gzip for many years to come.

Maybe, but it seems more likely that they'll usually do whatever the
easiest reasonable-seeming thing is.

Perhaps we need to distinguish between the "container format" (e.g., a
backup that is produced by a tool like pgBackRest, including the backup
manifest) and the compression algorithm itself.

Isn't it an incontrovertible fact that LZ4 is superior to pglz in every
way? LZ4 is pretty much its successor. And so it seems totally fine to
assume that users will always want to use the clearly better option, and
that that option will be generally available going forward. TOAST
compression is applied selectively already, based on various obscure
implementation details, some of which are quite arbitrary.

I know less about zstd, but I imagine that a similar dynamic exists
there.

Overall, everything that you've said makes sense to me.

--
Peter Geoghegan
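
[Illustration added in editing, not part of the original message.] To make
the pglz-vs-LZ4 comparison concrete, here is a minimal sketch of how TOAST
compression is selected per column on a PostgreSQL 14+ server built with
LZ4 support; the table and column names are made up for illustration:

    -- illustrative names; assumes a server built --with-lz4
    SET default_toast_compression = 'lz4';             -- default for newly stored values
    CREATE TABLE demo (payload text COMPRESSION lz4);  -- per-column override
    INSERT INTO demo SELECT repeat('compressible data ', 1000);
    SELECT pg_column_compression(payload) FROM demo;   -- 'lz4', or NULL if the value was not compressed
    ALTER TABLE demo ALTER COLUMN payload SET COMPRESSION pglz;  -- affects only values stored afterwards

The "applied selectively" point above is visible here: whichever algorithm
is configured, whether a given value is compressed at all still depends on
TOAST's size thresholds and heuristics, which is why pg_column_compression()
can return NULL.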