Another option to consider is changing your SSTable compression. The default is 
LZ4, which is fast on reads and writes but leaves a larger on-disk footprint 
than stronger compressors. A better alternative might be Zstd, which trades a 
bit of CPU for a smaller disk footprint. Here’s the full documentation: 
https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html
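 
If you decide to try it, compression is a per-table setting. Here's a rough 
sketch of what the change could look like (the keyspace and table names are 
placeholders, ZstdCompressor requires Cassandra 4.0 or newer, and the 
compression_level shown is just the default):

  ALTER TABLE my_keyspace.my_table
    WITH compression = {'class': 'ZstdCompressor', 'compression_level': 3};

  -- Only newly written SSTables pick up the new settings; to rewrite the
  -- existing SSTables with Zstd, something like
  --   nodetool upgradesstables -a my_keyspace my_table
  -- should do it.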

This won’t fully solve your problem, but it should help, and it might be 
useful for getting out of a tight spot in the future.

—
Abe

> On Nov 15, 2021, at 10:41 PM, onmstester onmstester <onmstes...@zoho.com> 
> wrote:
> 
> Thank You
> 
> 
> 
> ---- On Tue, 16 Nov 2021 10:00:19 +0330 <a...@aber.io> wrote ----
> 
> > I can, but I thought 5TB per node already violates best practices (1-2 
> > TB per node), so wouldn't going 2x or 3x beyond that be a bad idea?
> 
> The main downside of larger disks is that it takes longer to replace a host 
> that goes down: more data has to stream over the same network capacity from 
> the surviving instances to the new replacement instance. The longer a 
> replacement takes, the longer the window in which a further failure can cause 
> unavailability (for example: if you’re running a 3-instance cluster and one 
> node is down awaiting replacement, any additional node going down will cause 
> downtime for reads that require a quorum).
> 
> These are some of the main factors to consider here. You can always bump the 
> disk capacity for one instance, measure replacement times, then decide 
> whether to increase disk capacity across the cluster.
> 
> 
