Right now the biggest SSTable I have is 210 GB on a 3 TB disk, total disk
consumed is around 50% on all nodes, and I am using STCS. Read and write query
latency is under 15 ms. Full repair time is long, but I am sure that switching
to incremental repairs would take care of that. I am already hitting the 50%
disk usage mark.
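
In case it is useful to others, one way to spot the biggest SSTables on a
node is to scan the data directory for the *-Data.db components. A minimal
sketch in Python, assuming the default package-install path (adjust DATA_DIR
for your layout):

#!/usr/bin/env python3
import os

DATA_DIR = "/var/lib/cassandra/data"  # assumed default; check cassandra.yaml

def largest_sstables(root, top_n=10):
    """Return the top_n largest SSTable data files under root."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith("-Data.db"):  # the SSTable data component
                path = os.path.join(dirpath, name)
                sizes.append((os.path.getsize(path), path))
    return sorted(sizes, reverse=True)[:top_n]

if __name__ == "__main__":
    for size, path in largest_sstables(DATA_DIR):
        print("%8.1f GiB  %s" % (size / 1024.0 ** 3, path))

With STCS a single table can end up with one very large SSTable like the
210 GB one above, since size-tiered compaction keeps merging the biggest
files together.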
The four criteria I would suggest for evaluating node size:
1. Query latency.
2. Query throughput/load.
3. Repair time - worst case, a full repair, which is what you can least
afford if it happens at the worst time.
4. Expected growth over the next six to 18 months - you don't want to be
scrambling with latency problems when that growth arrives (see the sketch
after this list).
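
For criterion 4, a back-of-the-envelope projection is usually enough to see
whether you need to act now or in a year. A minimal sketch with assumed
numbers - substitute measurements from your own cluster, e.g. per-node load
from nodetool status sampled over a few weeks:

current_tb = 1.0            # data per node today
disk_tb = 3.0               # raw disk per node
threshold = 0.5             # stay under ~50% disk used for STCS headroom
growth_tb_per_month = 0.04  # assumed growth rate; measure yours

headroom_tb = disk_tb * threshold - current_tb
months_left = headroom_tb / growth_tb_per_month
print("~%.0f months until the %.0f%% mark" % (months_left, threshold * 100))

The 50% threshold is the usual STCS rule of thumb, since a compaction can
temporarily need about as much free space as the data it is compacting.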
>
> Would adding nodes be the right way to start if I want to get the data per
> node down?
Yes, if everything else is fine, the last and always available option to
reduce the disk size per node is to add new nodes. Sometimes it is the
first option considered, as it is relatively quick and quite straightforward.
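
To get a feel for how much an expansion buys you, here is a rough sizing
sketch. It assumes vnodes with even token ownership and an unchanged
replication factor, and it ignores the extra data that sits on the old nodes
until you run nodetool cleanup on them after the bootstraps finish:

nodes_now = 9
data_per_node_tb = 1.0
total_tb = nodes_now * data_per_node_tb

for added in (3, 6, 9):
    per_node = total_tb / (nodes_now + added)
    print("+%d nodes -> ~%.2f TB per node" % (added, per_node))

So going from 9 to 12 nodes would bring each node down to roughly 0.75 TB,
and doubling to 18 nodes halves it to about 0.5 TB.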
Thanks for the response, Alain. I am using STCS and would like to take some
action, as we will be hitting 50% disk space pretty soon. Would adding nodes
be the right way to start if I want to get the data per node down? Otherwise,
can you or someone on the list please suggest the right way to go about it?
Hi,
> I seek advice on data size per node. Each of my nodes has close to 1 TB of
> data. I am not seeing any issues as of now but wanted to run it by you guys
> if this data size is pushing the limits in any manner and if I should be
> working on reducing data size per node.
There is no real limit in Cassandra itself; the practical limits are
operational ones, like how long repairs take and how fast you can add
capacity.
Hi all,
I am running a 9 node C* 2.1.12 cluster. I seek advice on data size per
node. Each of my nodes has close to 1 TB of data. I am not seeing any issues
as of now but wanted to run it by you guys if this data size is pushing the
limits in any manner and if I should be working on reducing data size per
node.