If capacity allows, increase compaction_throughput_mb_per_sec as a first
tuning step, and if compactions are still falling behind, increase
concurrent_compactors as a second step.
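
Both knobs can be changed on a running node without a restart; the values
below are illustrative only, a sketch of the two steps:

```shell
# Raise the global compaction throughput cap (MB/s); 0 disables the throttle.
nodetool setcompactionthroughput 128
nodetool getcompactionthroughput

# Raise the number of compaction tasks allowed to run in parallel.
nodetool setconcurrentcompactors 4
nodetool getconcurrentcompactors
```

To make the change survive a restart, also set compaction_throughput_mb_per_sec
and concurrent_compactors in cassandra.yaml.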

Regards,

Jim

On Fri, Sep 2, 2022 at 3:05 AM onmstester onmstester via user <
user@cassandra.apache.org> wrote:

> Another thing that comes to my mind: increase the minimum number of
> sstables required to trigger a compaction (min_threshold) from 4 to 32 for
> the big table that won't be read much, although you should watch out for
> the sstable count growing too high.
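
For a size-tiered table that threshold is a schema change; a sketch, assuming
STCS, where `ks1.big_table` is a placeholder for the actual keyspace/table:

```shell
# Require 32 similarly-sized sstables before STCS compacts them together.
cqlsh -e "ALTER TABLE ks1.big_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                     'min_threshold': '32'};"
```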
>
> Sent using Zoho Mail <https://www.zoho.com/mail/>
>
>
>
> ---- On Fri, 02 Sep 2022 11:29:59 +0430 *onmstester onmstester via user
> <user@cassandra.apache.org <user@cassandra.apache.org>>* wrote ---
>
> I was there too, and found nothing to work around it except stopping
> big/unnecessary compactions manually (using nodetool stop) whenever they
> appear, via shell scripts scheduled with crontab.
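
A sketch of such a cron job: find oversized compactions on one table in the
`nodetool compactionstats` output and stop them by id. The keyspace, table,
and size threshold are hypothetical, and the canned stats output stands in
for a live node so the parsing logic can be run anywhere:

```shell
#!/bin/sh
KEYSPACE=ks1
TABLE=big_table
MAX_BYTES=107374182400          # stop tasks bigger than ~100 GiB

# On a real node, replace this function body with: nodetool compactionstats
compactionstats() {
  cat <<'EOF'
id                                   compaction type keyspace table     completed  total        unit  progress
f1c2d3e4-0000-1111-2222-333344445555 Compaction      ks1      big_table 1073741824 214748364800 bytes 0.50%
a1b2c3d4-9999-8888-7777-666655554444 Compaction      ks1      small_tbl 1048576    10485760     bytes 10.00%
EOF
}

# Task lines are: id, type, keyspace, table, completed, total, unit, progress.
ids=$(compactionstats |
  awk -v ks="$KEYSPACE" -v tb="$TABLE" -v max="$MAX_BYTES" \
      '$3 == ks && $4 == tb && $6 + 0 > max { print $1 }')

for id in $ids; do
  # On a live node: nodetool stop -id "$id"
  echo "stopping compaction $id"
done
```

Scheduled every few minutes from crontab, this keeps the big table's huge
compactions from starving the rest.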
>
>
>
>
> ---- On Fri, 02 Sep 2022 10:59:22 +0430 *Gil Ganz <gilg...@gmail.com
> <gilg...@gmail.com>>* wrote ---
>
>
>
>
> Hey
> When deciding which sstables to compact together, how is the priority
> determined between tasks, and can I do something about it?
>
> In some cases (mostly after removing a node), it takes a while for
> compactions to keep up with the data streamed from the removed node. I see
> the node busy with huge compaction tasks while, in the meantime, a lot of
> small sstables (new data coming from the application) pile up. Read
> performance suffers because the new data is scattered across many
> sstables, and combining the big sstables probably won't reduce that
> fragmentation much (I think).
>
> Another thing that comes to mind: I have a table that is very big but not
> read much, and it would be nice to give the other tables higher compaction
> priority (to help in a case like the one described above).
>
> Version is 4.0.4
>
> Gil
>
>
>
>
