3.11.4 is a very old release with lots of known bugs. It's possible the
memory growth is related to that.
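
For what it's worth, here is a rough back-of-envelope sketch of where a
node's native footprint should sit with a 31 GiB heap. The off-heap
figures below are illustrative assumptions, not measurements from your
cluster:

```python
# Back-of-envelope memory accounting for one node.
# Heap and RAM numbers are from the original mail; the off-heap
# consumers are ASSUMED round numbers for illustration only.
heap_gib = 31            # -Xms31g / -Xmx31g
total_ram_gib = 125      # physical memory reported

# Illustrative off-heap consumers (assumed, not measured):
offheap_memtables_gib = 8      # memtable_allocation_type: offheap_objects
bloom_filters_gib = 2
compression_metadata_gib = 1
index_summaries_gib = 1

jvm_footprint_gib = (heap_gib + offheap_memtables_gib + bloom_filters_gib
                     + compression_metadata_gib + index_summaries_gib)
print(jvm_footprint_gib)
```

If the process RSS is far above a sum like this, something is leaking; if
the process RSS is near it but the alarm still fires, the monitoring may be
counting the Linux page cache, which a write-heavy TWCS workload fills
quickly and which the kernel reclaims on demand.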

If you bounce one of the old nodes, where does the memory end up?
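
A few standard commands can show where it ends up. These all exist in
stock Cassandra 3.11 / Linux; the PID lookup is an assumption about how
the daemon was started, so adjust it for your environment:

```shell
# Find the Cassandra process (adjust the pattern if needed)
PID=$(pgrep -f CassandraDaemon)

# Total resident set size of the process
pmap -x "$PID" | tail -n 1

# Cassandra's own accounting of heap vs. off-heap usage
nodetool info | grep -i -E 'heap|off'

# Per-table off-heap consumers (memtables, bloom filters, index summaries)
nodetool tablestats | grep -i 'off heap'

# If the node was started with -XX:NativeMemoryTracking=summary,
# this breaks down the JVM's own native allocations
jcmd "$PID" VM.native_memory summary
```

Comparing the pmap total against what `nodetool info` reports will tell
you whether the growth is memory Cassandra knows about (memtables, bloom
filters) or something outside its accounting (e.g. glibc arenas or a
native leak).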


On Thu, Jan 6, 2022 at 3:44 PM Eunsu Kim <eunsu.bil...@gmail.com> wrote:

>
> Looking at the memory usage chart, it seems that the physical memory usage
> of the existing node has increased since the new node was added with
> auto_bootstrap=false.
>
>
>
>
> On Fri, Jan 7, 2022 at 1:11 AM Eunsu Kim <eunsu.bil...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a Cassandra cluster (3.11.4) that does heavy write work (14k~16k
>> writes per second per node).
>>
>> The nodes are physical machines in a data center. There are 30 nodes, and
>> each node has three data disks mounted.
>>
>>
>> A few days ago, a QueryTimeout problem occurred due to Full GC.
>> Following this blog post (
>> https://thelastpickle.com/blog/2018/04/11/gc-tuning.html), the problem
>> seemed to be solved by changing memtable_allocation_type to
>> offheap_objects.
>>
>> But today, I got an alarm saying that some nodes are using more than 90%
>> of physical memory (115 GiB / 125 GiB).
>>
>> The native memory usage of some nodes is gradually increasing.
>>
>>
>>
>> All tables use TWCS, and TTL is 2 weeks.
>>
>> Below are the applied JVM options:
>>
>> -Xms31g
>> -Xmx31g
>> -XX:+UseG1GC
>> -XX:G1RSetUpdatingPauseTimePercent=5
>> -XX:MaxGCPauseMillis=500
>> -XX:InitiatingHeapOccupancyPercent=70
>> -XX:ParallelGCThreads=24
>> -XX:ConcGCThreads=24
>> …
>>
>>
>> What additional things can I try?
>>
>> I am looking forward to the advice of experts.
>>
>> Regards.
>>
>
>