Re: Compaction task priority

2022-09-06 Thread onmstester onmstester via user
Using nodetool stop -id COMPACTION_UUID (reported in compactionstats); you can also find the details with nodetool help stop. On Mon, 05 Sep 2022 10:18:52 +0430 Gil Ganz wrote --- onmstester - How can you stop a specific compaction task? stop
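
A quick sketch of that workflow; the UUID below is only a placeholder for whatever id compactionstats reports on your node:

    # list running compactions and their ids
    nodetool compactionstats
    # stop just that one task
    nodetool stop -id 8f5ef660-2f1d-11ed-a261-0242ac120002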

Re: Compaction task priority

2022-09-04 Thread Gil Ganz
onmstester - How can you stop a specific compaction task? The stop command stops all compactions of a given type (it would be nice to be able to stop a specific one). Jim - in my case the solution was actually to limit concurrent compactors, not increase it. Too many tasks caused the server to slow down a
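
For reference, limiting compactors can be done at runtime without a restart; the value here is only illustrative, and getconcurrentcompactors/setconcurrentcompactors need a reasonably recent Cassandra:

    # check the current setting, then lower it
    # (the permanent setting is concurrent_compactors in cassandra.yaml)
    nodetool getconcurrentcompactors
    nodetool setconcurrentcompactors 2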

Re: Compaction task priority

2022-09-02 Thread Jim Shaw
If capacity allows, increase compaction_throughput_mb_per_sec as a first tuning step, and if compaction still falls behind, increase concurrent_compactors as a second step. Regards, Jim On Fri, Sep 2, 2022 at 3:05 AM onmstester onmstester via user <user@cassandra.apache.org> wrote: > Another thing that comes to my mind
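
Both knobs can also be adjusted at runtime, roughly like this (the numbers are only examples, not recommendations):

    # raise the compaction throughput cap in MB/s (0 disables throttling entirely)
    nodetool setcompactionthroughput 128
    nodetool getcompactionthroughput
    # only if there is spare CPU/IO headroom, raise the number of concurrent compactors
    nodetool setconcurrentcompactors 4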

Re: Compaction task priority

2022-09-02 Thread onmstester onmstester via user
Another thing that comes to my mind: increase the minimum sstable count to compact from 4 to 32 for the big table that won't be read that much, although you should watch out for the sstable count growing too large. On Fri, 02 Sep 2022 11:29:59 +0430 onmstester
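
Assuming the big table uses SizeTieredCompactionStrategy (where 4 is the default minimum), that change would look something like the following; the keyspace and table names are made up:

    cqlsh -e "ALTER TABLE my_keyspace.big_table WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 32};"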

Re: Compaction task priority

2022-09-02 Thread onmstester onmstester via user
I was there too! I found nothing to work around it except stopping big/unnecessary compactions manually (using nodetool stop) whenever they appear, via some shell scripts run from crontab. On Fri, 02 Sep 2022 10:59:22 +0430 Gil Ganz wrote ---
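
A rough sketch of such a cron-driven script, assuming the table name shows up in nodetool compactionstats output and that the compaction id is the first column; the column layout varies between Cassandra versions, so verify it on your cluster before relying on this:

    #!/bin/sh
    # stop any running compaction that involves this (hypothetical) table
    TABLE="big_table"
    nodetool compactionstats | grep "$TABLE" | awk '{print $1}' | while read id; do
        nodetool stop -id "$id"
    done
    # example crontab entry, checking every 5 minutes:
    # */5 * * * * /usr/local/bin/stop_big_compactions.sh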

Compaction task priority

2022-09-01 Thread Gil Ganz
Hey, When deciding which sstables to compact together, how is the priority determined between tasks, and can I do something about it? In some cases (mostly after removing a node), it takes a while for compactions to keep up with the new data that came from the removed nodes, and I see it is busy on huge