Isn't there a very big (>40GB) sstable in /volumes/cassandra/data/data1? If
there is, you could split it or change your data model to prevent such sstables.
Sent using https://www.zoho.com/mail/
Forwarded message
From: Loïc CHANEL via user
Date: Fri, 06 J
Another solution: distribute the data across more tables. For example, you could
create multiple tables based on the value or hash bucket of one of the columns;
by doing this, the current data volume and compaction overhead would be divided
across the number of underlying tables. Although there is a limitation for n
I encountered the same problem again with the same error logs (this time with
Apache Cassandra 4.0.6 and a new cluster), but unlike the previous time, the
hostname config was fine. After days of trial and error, I finally found the
root cause: the time on the faulty server was off by 2 minutes and not in sy
I patched this on 3.11.2 easily:
1. build the jar file from source and put it in the cassandra/lib directory
2. restart the cassandra service
3. alter the table to use zstd compression and rebuild the sstables
But that was at a time when 4.0 was not available yet, and after that I upgraded
to 4.0 immediately.
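On stock Cassandra 4.0, where the Zstd compressor ships out of the box, step 3 could look roughly like this (the keyspace/table `ks.events` is a placeholder; on a patched 3.11.2 build the compressor class name would depend on the backport):

```shell
# Switch a table to Zstd compression (ks.events is a placeholder)
cqlsh -e "ALTER TABLE ks.events WITH compression = {'class': 'ZstdCompressor'};"
# Rewrite existing sstables so they are recompressed; -a forces rewriting
# sstables that are already on the current sstable version
nodetool upgradesstables -a ks events
```

Note that the ALTER only affects newly written sstables; the `upgradesstables -a` pass is what actually recompresses the data already on disk.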
PM Jim Shaw <mailto:jxys...@gmail.com> wrote:
if capacity allows, increase compaction_throughput_mb_per_sec as the 1st tuning,
and if still behind, increase concurrent_compactors as the 2nd tuning.
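Both knobs can also be changed at runtime with nodetool, without a restart (the values below are illustrative, not recommendations; sensible numbers depend on disk and CPU headroom):

```shell
# Raise the compaction throughput cap (in MB/s); 0 disables throttling entirely
nodetool setcompactionthroughput 64
# Allow more compactions to run in parallel (bounded by cores and disk I/O)
nodetool setconcurrentcompactors 4
# Verify the new settings
nodetool getcompactionthroughput
nodetool getconcurrentcompactors
```

Runtime changes made this way do not survive a restart; to make them permanent, set compaction_throughput_mb_per_sec and concurrent_compactors in cassandra.yaml as well.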
Regards,
Jim
On Fri, Sep 2, 2022 at 3:05 AM onmstester onmstester via user
+0430 onmstester onmstester via user wrote ---
I was there too! and found nothing to work around it except stopping
big/unnecessary compactions manually (using nodetool stop) whenever they
appeared, via some shell scripts (using crontab)
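The crontab workaround described might look roughly like this minimal sketch; the schedule, log path, and the choice to stop all compactions (rather than only large ones) are assumptions:

```shell
# Hypothetical crontab entry: during a load-shedding window, stop all running
# compactions every 5 minutes (paths are placeholders)
# */5 * * * * /usr/local/bin/nodetool stop COMPACTION >> /var/log/stop-compactions.log 2>&1
nodetool stop COMPACTION
```

A finer-grained script could parse `nodetool compactionstats` and use `nodetool stop -id <compaction-id>` to target only the big compactions, but the output format varies between versions, so the parsing would need adjusting per release.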
On Fri, 02 Sep 2022 10:59:22 +0430 Gil Ganz wrote ---