Isn't there a very big (>40GB) sstable in /volumes/cassandra/data/data1? If
there is, you could split it, or change your data model to prevent such
sstables from forming in the first place.
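If a single oversized sstable is the problem, Cassandra ships an offline sstablesplit tool that can break it into smaller files. A minimal sketch of the procedure, assuming the node can be stopped first (the keyspace, table, and sstable file names below are illustrative placeholders, not taken from this thread):

```shell
# Flush memtables and stop the node first; sstablesplit is an OFFLINE tool
# and must never run against a live node.
nodetool drain
sudo systemctl stop cassandra

# Split the large sstable into ~10GB chunks (-s takes a size in megabytes).
# Keyspace/table/file names here are placeholders for illustration only.
sstablesplit -s 10240 \
  /volumes/cassandra/data/data1/my_keyspace/my_table-*/nb-1-big-Data.db

sudo systemctl start cassandra
```

After the node restarts, the resulting smaller sstables can be compacted and balanced like any others.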
Forwarded message
From: Loïc CHANEL via user
To:
Date: Fri, 06 J
Another solution: distribute the data across more tables. For example, you
could create multiple tables keyed on the value or hash bucket of one of the
columns; that way the current data volume and the compaction overhead are
divided by the number of underlying tables. Although there is a limitation for n
Hi team,
Does anyone know how to even out the data between several data disks?
Another approach could be to prevent Cassandra from writing to a disk that is
90% full, but is there a way to do that?
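As far as I know there is no built-in switch to refuse writes at an arbitrary fullness threshold, so one pragmatic workaround is an external watchdog that checks disk usage and raises an alert (or triggers some operator-defined action). A minimal sketch, assuming the 90% threshold from the question; the data directory and the alert action are placeholders:

```shell
# Minimal disk-usage watchdog (sketch). DATA_DIR would normally point at a
# Cassandra data disk such as /volumes/cassandra/data/data1; "/" is only a
# fallback default so the script runs anywhere.
DATA_DIR="${DATA_DIR:-/}"
THRESHOLD=90

# df -P forces POSIX output: one data line, with usage percent in column 5.
usage=$(df -P "$DATA_DIR" | awk 'NR==2 { gsub(/%/, ""); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
  # Placeholder action: alert an operator. A real deployment might page,
  # disable autocompaction, or stop accepting writes at the application level.
  echo "WARNING: $DATA_DIR is ${usage}% full"
else
  echo "OK: $DATA_DIR is ${usage}% full"
fi
```

Run from cron (or a systemd timer) per data disk, this gives early warning before a disk actually fills.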
Thanks,
Loïc CHANEL
System Big Data engineer
SoftAtHome (Lyon, France)
On Mon, Dec 19, 2022 at 11:07, Loïc