Hello,

We are currently on C* 1.2 and are using the SnappyCompressor for all our
CFs. Total data size is 24 TB across a 12-node cluster, so the average
node holds about 2 TB.

We are adding nodes at the moment, and it seems like compression is
falling behind. I judge that by the fact that the new node, which has a
4.5 TB disk, fills up to 100% while it's bootstrapping. Can we avoid this
problem by switching to the LZ4 compressor for its better compression
ratio, or do we just need a bigger disk?
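
For reference, I assume the switch would look something like the
following (keyspace/CF names are placeholders, and I believe
LZ4Compressor only shipped in 1.2.2; existing SSTables would only be
rewritten with the new compressor on compaction unless we force it):

    ALTER TABLE my_ks.my_cf
      WITH compression = {'sstable_compression': 'LZ4Compressor'};

    nodetool upgradesstables -a my_ks my_cf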

The reason we started with 4.5 TB disks was that we assumed a node would
not need more than 2x the average data size while bootstrapping
(2 x 2 TB = 4 TB, which fits within 4.5 TB). Is that a weak assumption?

Ruchir.
