Agreed about deflate.

Also, you can adjust your chunk size, which may help ratios as well,
especially if you expect your data to compress well. Larger chunks often
compress better, but it depends on the nature of your data.
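
For example, on a hypothetical table my_keyspace.my_table, switching to
Deflate with a larger chunk would look roughly like this (256 KB is just an
illustrative value, not a recommendation):

    -- Deflate trades extra CPU for a better ratio; a larger chunk length may
    -- improve the ratio further at the cost of more read amplification.
    ALTER TABLE my_keyspace.my_table
      WITH compression = {'class': 'DeflateCompressor', 'chunk_length_in_kb': 256};

Keep in mind that existing SSTables keep their old compression parameters
until they're rewritten, either by normal compaction or forced with
"nodetool upgradesstables -a".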

In the near future, look for work from Sushma @ Instagram to make ZStandard
available (https://jira.apache.org/jira/browse/CASSANDRA-14482).
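
Once that lands (it's targeted at 4.0, so not usable on 3.11 yet), I'd expect
the table option to look along these lines; this is only a rough sketch based
on the patch, with an assumed tunable compression level:

    -- Sketch only: the class name and options come from the CASSANDRA-14482
    -- patch and may change before release.
    ALTER TABLE my_keyspace.my_table
      WITH compression = {'class': 'ZstdCompressor', 'compression_level': 3, 'chunk_length_in_kb': 64};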

On Thu, Aug 9, 2018 at 8:02 AM, Elliott Sims <elli...@backblaze.com> wrote:

> Deflate instead of LZ4 will probably give you somewhat better compression
> at the cost of a lot of CPU.  Larger chunk length might also help, but in
> most cases you probably won't see much benefit above 64K (and it will
> increase I/O load).
>
> On Wed, Aug 8, 2018 at 11:18 PM, Eunsu Kim <eunsu.bil...@gmail.com> wrote:
>
>> Hi all.
>>
>> I’m concerned about how much disk space we use, so I’m interested in
>> compression. We are currently on 3.11.0 with the default LZ4Compressor
>> ('chunk_length_in_kb': 64).
>> Is there a setting that gives stronger compression?
>> Because most of our data is time series with a TTL, we use
>> TimeWindowCompactionStrategy.
>>
>> Thank you in advance.
>>
>>
>
