On Thu, 11 Feb 2021, 21:09 Daniil Zakhlystov, <usernam...@yandex-team.ru>
wrote:

>
> 3. Chunked compression makes it possible to compress only well-compressible
> messages and save CPU cycles by not compressing the others
> 4. Chunked compression introduces some traffic overhead compared to
> permanent (stream) compression (1.2810G vs 1.2761G TX data on pg_restore of
> the IMDB database dump, according to the results in my previous message)
> 5. From the protocol point of view, chunked compression seems a little bit
> more flexible:
>  - we can inject uncompressed messages at any time without having to
> decompress and re-compress the surrounding data
>  - we can potentially switch the compression algorithm at any time (but I
> think that this might be over-engineering)
>

Chunked compression also potentially makes it easier to handle non-blocking
sockets, because you aren't dealing with yet another layer of buffering
within the compression stream. This is a real pain with SSL, for example.
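To illustrate that point, here's a minimal sketch (Python with zlib, using a hypothetical frame layout, not the actual PostgreSQL wire format) of why per-message compression plays well with non-blocking I/O: every frame decodes independently, so a short read just means "wait for the rest of this frame", with no hidden state buried inside a compression stream.

```python
import struct
import zlib

# Hypothetical frame layout (illustration only, not the real protocol):
#   1 byte  flag: 0 = raw payload, 1 = zlib-compressed payload
#   4 bytes big-endian payload length
#   N bytes payload

def encode_frame(payload: bytes, compress: bool) -> bytes:
    body = zlib.compress(payload) if compress else payload
    return struct.pack("!BI", 1 if compress else 0, len(body)) + body

def decode_frame(buf: bytes):
    """Return (payload, bytes_consumed), or None if the frame is incomplete.

    Because each frame is self-contained, an incomplete read leaves no
    decompressor state behind -- the caller simply retries once more bytes
    arrive, which is what makes non-blocking sockets straightforward.
    """
    if len(buf) < 5:
        return None
    flag, length = struct.unpack("!BI", buf[:5])
    if len(buf) < 5 + length:
        return None
    body = buf[5:5 + length]
    payload = zlib.decompress(body) if flag else body
    return payload, 5 + length
```

With stream compression, by contrast, bytes can sit inside the decompressor even after the socket reports "no data", which is exactly the extra buffering layer the SSL comparison refers to.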

It simplifies protocol analysis, since each message can be inspected without
reconstructing the whole compression stream.

It permits compression to be decided on the fly, heuristically, based on
message size and potential compressibility.
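A sketch of what such an on-the-fly heuristic could look like (the thresholds and the probe-a-prefix idea are my own assumptions, not anything proposed in the patch): skip small messages outright, and for large ones cheaply compress a prefix to estimate compressibility.

```python
import zlib

MIN_COMPRESS_SIZE = 64   # assumed cutoff: tiny messages aren't worth the CPU
PROBE_SIZE = 256         # compress only a prefix to estimate compressibility
PROBE_RATIO = 0.9        # compress if the probe shrinks below this ratio

def should_compress(payload: bytes) -> bool:
    """Heuristic sketch: decide per message whether compression pays off."""
    if len(payload) < MIN_COMPRESS_SIZE:
        return False
    probe = payload[:PROBE_SIZE]
    # zlib level 1 keeps the probe cheap; the result is only an estimate
    return len(zlib.compress(probe, 1)) < len(probe) * PROBE_RATIO
```

Messages that fail the check are sent raw, which is precisely the "save CPU cycles by not compressing" behaviour described in point 3 above.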

It could relatively easily be extended to compress a group of pending small
messages, e.g. at PQflush time. That would help mitigate the downsides with
small messages.

So while stream compression offers better compression ratios, I'm inclined
to suspect we'll want message-level compression.
