Hi there,

Please don't worry about the under-utilization of CPU/RAM; it will increase with active usage of the data in due course.
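In the meantime, if you want to see where the CPU time is actually going as usage ramps up, the standard nodetool commands give a quick picture. This is just a rough checklist (substitute your own keyspace name for the placeholder):

  nodetool status                      # per-node data load and token ownership
  nodetool tpstats                     # thread pool activity (mutations, reads, flushes)
  nodetool compactionstats             # pending and active compactions
  nodetool tablestats <your_keyspace>  # per-table read/write counts and latencies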
However, as other members have already pointed out, you may want to revisit the hardware itself: 5TB of storage per node with higher CPU+RAM could instead be reconfigured into a larger number of lower-capacity nodes (less CPU+RAM and storage each). That gives you more distribution, which means faster query response and faster recovery from a node failure. For example, rebuilding a failed 5TB node means re-streaming roughly 5TB from its peers, whereas smaller nodes each carry far less data and have more peers to stream from in parallel. Reorganizing the cluster will be a sizeable exercise, but the benefits are surely worth the effort.

Moving to another platform/solution just for this reason is probably not the best option, as you would lose the benefits of Cassandra on that platform and would have to remodel your data from scratch.

regards
Dev

On Mon, Nov 15, 2021 at 11:30 AM onmstester onmstester <onmstes...@zoho.com> wrote:

> Hi,
> In our Cassandra cluster, because of big rows in the input data/data model
> with a TTL of several months, we ended up using almost 80% of storage (5TB
> per node) but less than 20% of CPU, almost all of which goes to writing
> rows to memtables and compacting sstables, so a lot of CPU capacity is
> wasted.
> I wonder if there is anything we can do to solve this problem within
> Cassandra, or should we migrate from Cassandra to something that separates
> storage and processing (currently I'm not aware of anything as stable as
> Cassandra)?
>
> Sent using Zoho Mail <https://www.zoho.com/mail/>