On Fri, Aug 02, 2019 at 07:52:39PM +0200, Tomas Vondra wrote:
> On Fri, Aug 02, 2019 at 10:12:58AM -0700, Andres Freund wrote:
>> Why would they be stuck continuing to *compress* with pglz?  As we
>> fully retoast on write anyway we can just gradually switch over to the
>> better algorithm.  Decompression speed is another story, of course.
>
> Hmmm, I don't remember the details of those patches so I didn't realize
> it allows incremental recompression.  If that's possible, that would mean
> existing systems can start using it.  Which is good.
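To make the idea of a gradual switchover concrete, here is a minimal
standalone sketch (this is not the actual patch nor the real TOAST
on-disk format, and the codec bodies are placeholders): each compressed
datum records which algorithm produced it, decompression dispatches on
that ID, so values written with pglz keep working while new writes can
use a different algorithm, and an ID unknown to the binary becomes the
mismatch ERROR case mentioned below.

/*
 * Standalone sketch, not PostgreSQL code: a per-datum compression
 * method ID lets old pglz values and newly written values coexist.
 * The "codec" below is a placeholder standing in for pglz/lz4/etc.
 */
#include <stdio.h>
#include <string.h>

typedef enum
{
	COMPRESSION_PGLZ = 0,		/* legacy algorithm, always available */
	COMPRESSION_NEW = 1			/* hypothetical new algorithm */
} CompressionId;

typedef struct
{
	CompressionId method;		/* which codec produced "data" */
	size_t		len;
	char		data[64];		/* placeholder payload */
} CompressedDatum;

/* Placeholder codec: real code would call the actual decompressor. */
static void
dummy_decompress(const CompressedDatum *d, char *out)
{
	memcpy(out, d->data, d->len);
	out[d->len] = '\0';
}

/*
 * Dispatch on the stored method.  An ID this binary does not know
 * about means the datum was written by a build with an algorithm we
 * lack, which is the mismatching-algorithm ERROR case.
 */
static int
datum_decompress(const CompressedDatum *d, char *out)
{
	switch (d->method)
	{
		case COMPRESSION_PGLZ:
		case COMPRESSION_NEW:
			dummy_decompress(d, out);
			return 0;
		default:
			fprintf(stderr, "ERROR: unsupported compression method %d\n",
					(int) d->method);
			return -1;
	}
}

int
main(void)
{
	/* An "old" value still marked as pglz and a "new" one, side by side. */
	CompressedDatum old_val = {COMPRESSION_PGLZ, 5, "hello"};
	CompressedDatum new_val = {COMPRESSION_NEW, 5, "world"};
	char		buf[65];

	if (datum_decompress(&old_val, buf) == 0)
		printf("old datum: %s\n", buf);
	if (datum_decompress(&new_val, buf) == 0)
		printf("new datum: %s\n", buf);
	return 0;
}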
This may become a problem on some platforms though (Windows?), so
patches improving either the compression or the decompression of pglz
are not that crazy: pglz is still likely going to be used, and for
read-mostly workloads switching to a new algorithm may not be worth the
extra cost, so it is not as if we are going to drop it completely
either.  My take, still the same as upthread, is that it mostly depends
on how much complexity each patch introduces compared to the
performance gained.

> Another question is whether we'd actually want to include the code in
> core directly, or use system libraries (and if some packagers might
> decide to disable that, for whatever reason).

Linking to system libraries would make our maintenance much easier,
while keeping a copy of somebody else's code in the tree would mean
more maintenance work for us, and such copies tend to rot easily.
After that comes the case where a compression algorithm available in
one server's binary is missing from another, where we would get an
automatic ERROR for a mismatching algorithm, or a FATAL when
decompressing FPWs at recovery with wal_compression enabled.
--
Michael