Hi

> As a software developer, I definitely want to implement compression and
> save a few gigabytes. However, given my previous experience using
> Postgres in real-world applications, reliability at the cost of several
> gigabytes would not have caused me any trouble. Just saying.

Agreed, +1. If this had been done twenty years ago, the cost might have been
unacceptable. But today's hardware is different: random and sequential disk
I/O performance has improved by orders of magnitude, and memory capacity by
several hundred times; single 256-GB DIMMs, which would have been almost
unimaginable back then, are now a reality. So this kind of overhead is
negligible on modern hardware.
Thanks

On Wed, 3 Dec 2025 at 17:54, Maxim Orlov <[email protected]> wrote:
> The biggest problem with compression, in my opinion, is that losing
> even one byte causes the loss of the entire compressed block in the
> worst case scenario. After all, we still don't have checksums for the
> SLRU's, which is a shame by itself.
>
> Again, I'm not against the idea of compression, but the risks need to
> be considered.
>
> As a software developer, I definitely want to implement compression and
> save a few gigabytes. However, given my previous experience using
> Postgres in real-world applications, reliability at the cost of several
> gigabytes would not have caused me any trouble. Just saying.
>
> --
> Best regards,
> Maxim Orlov.
>
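To make the failure mode Maxim describes concrete, here is a minimal
standalone sketch. It is not Postgres code: it assumes plain zlib and an
8 kB block size purely for illustration, and the file name corrupt_demo.c
is just a placeholder. It shows how flipping a single byte in a compressed
block typically makes the entire block unreadable, not just the damaged byte.

/*
 * Minimal standalone sketch, not Postgres code: plain zlib and an 8 kB
 * block size are assumed purely for illustration.  Flipping one byte of
 * the compressed data typically loses the whole block on decompression.
 *
 * Build with: cc corrupt_demo.c -lz -o corrupt_demo
 */
#include <stdio.h>
#include <zlib.h>

#define BLOCK_SIZE 8192     /* assumed page size, illustration only */

int
main(void)
{
    unsigned char page[BLOCK_SIZE];
    unsigned char compressed[BLOCK_SIZE * 2];
    unsigned char restored[BLOCK_SIZE];
    uLongf      clen = sizeof(compressed);
    uLongf      rlen = sizeof(restored);
    int         rc;

    /* Fill the "page" with mildly compressible data. */
    for (int i = 0; i < BLOCK_SIZE; i++)
        page[i] = (unsigned char) (i % 251);

    if (compress(compressed, &clen, page, BLOCK_SIZE) != Z_OK)
    {
        fprintf(stderr, "compress failed\n");
        return 1;
    }
    printf("compressed %d bytes down to %lu bytes\n",
           BLOCK_SIZE, (unsigned long) clen);

    /* Flip one byte in the middle of the compressed block. */
    compressed[clen / 2] ^= 0x01;

    /*
     * Decompressing the damaged block normally fails outright with
     * Z_DATA_ERROR: the entire page is lost, not just the flipped byte.
     */
    rc = uncompress(restored, &rlen, compressed, clen);
    if (rc != Z_OK)
        printf("uncompress failed (rc=%d): the whole block is unreadable\n", rc);
    else
        printf("uncompress succeeded despite the corruption\n");

    return 0;
}

With zlib the damage is at least detected, since the format carries an
Adler-32 trailer, but the page contents are still gone, which is exactly the
worst case described above and why the missing SLRU checksums matter here.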
