> On 13 May 2019, at 12:14, Michael Paquier <mich...@paquier.xyz> wrote:
> 
> Decompression can matter a lot for mostly-read workloads and
> compression can become a bottleneck for heavy-insert loads, so
> improving compression or decompression should be two separate
> problems, not two problems linked.  Any improvement in one or the
> other, or even both, is nice to have.

Here's a patch hacked up by Vladimir for compression.

Key differences (as far as I can see; maybe Vladimir will post a more complete list 
of optimizations):
1. Use regular functions instead of function-like macros: unsurprisingly, they are 
easier to optimize and impose fewer constraints on the compiler.
2. More compact hash table: use indexes instead of pointers.
3. More robust segment comparison: like memcmp, but returning the index of the first 
differing byte (a rough sketch of points 2 and 3 follows below).
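
To make points 2 and 3 a bit more concrete, here is a minimal self-contained sketch; 
it is not the code from the patch, and all names, table sizes, and the toy hash 
function are made up for illustration. It shows a history whose hash chains are 
linked by 16-bit indexes instead of pointers, and a memcmp-like comparison that 
returns the length of the common prefix rather than just a sign.

    /* Illustrative sketch only; not the actual pglz patch. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define HIST_SIZE     4096
    #define HASH_SIZE     8192
    #define INVALID_ENTRY 0     /* index 0 is reserved as "no entry" */

    /*
     * Idea 3: compare two segments like memcmp, but return the length of
     * the common prefix (the index of the first differing byte), so the
     * caller can use the result directly as the match length.
     */
    static int32_t
    common_prefix_len(const unsigned char *a, const unsigned char *b, int32_t limit)
    {
        int32_t i;

        for (i = 0; i < limit; i++)
            if (a[i] != b[i])
                break;
        return i;
    }

    /*
     * Idea 2: history entries refer to each other by 16-bit indexes into a
     * fixed array instead of by 64-bit pointers, which shrinks the table
     * and keeps more of it in cache.
     */
    typedef struct
    {
        int16_t next;       /* index of next entry in the same hash chain */
        int16_t hash;       /* hash bucket this entry currently lives in */
        int32_t pos;        /* offset of the segment in the input buffer */
    } hist_entry;

    static hist_entry hist[HIST_SIZE];
    static int16_t    hash_table[HASH_SIZE];    /* bucket -> entry index */

    static uint32_t
    hash4(const unsigned char *p)
    {
        /* toy 4-byte hash, only for this sketch */
        uint32_t h = (uint32_t) p[0] | ((uint32_t) p[1] << 8) |
                     ((uint32_t) p[2] << 16) | ((uint32_t) p[3] << 24);
        return (h * 2654435761u) % HASH_SIZE;
    }

    int
    main(void)
    {
        const unsigned char *src = (const unsigned char *) "abcabcabcabcxyz";
        int16_t  e = 1;             /* next free history slot (0 = invalid) */
        uint32_t h, h2;

        memset(hash_table, 0, sizeof(hash_table));

        /* insert position 0 into the history */
        h = hash4(src);
        hist[e].next = hash_table[h];
        hist[e].hash = (int16_t) h;
        hist[e].pos = 0;
        hash_table[h] = e;

        /* later, at position 3, look for a match by walking the index chain */
        h2 = hash4(src + 3);
        for (int16_t cur = hash_table[h2]; cur != INVALID_ENTRY; cur = hist[cur].next)
        {
            int32_t len = common_prefix_len(src + hist[cur].pos, src + 3, 12);
            printf("candidate at %d, match length %d\n", (int) hist[cur].pos, (int) len);
        }
        return 0;
    }

Built with a plain cc, this prints the candidate at offset 0 with match length 9 
for the repeated "abc" prefix, i.e. the comparison result is directly usable as 
the match length.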

On a weighted mix of different data (same as used for decompression), the overall 
speedup is 1.43x on my machine.

The current implementation is integrated into the test_pglz suite for benchmarking 
purposes [0].

Best regards, Andrey Borodin.

[0] https://github.com/x4m/test_pglz

Attachment: 0001-Reorganize-pglz-compression-code.patch