I have a situation where off-heap memory is bloating the JVM process
memory, making it a candidate to be killed by the oom_killer.
My server has 256 GB RAM and a Cassandra heap of 16 GB.
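A rough way to confirm this (a sketch only; it assumes a Linux host and that the process command line matches "CassandraDaemon" - both are my assumptions) is to compare the process resident set size against -Xmx:

    # find the Cassandra JVM and read its resident set size
    CASS_PID=$(pgrep -f CassandraDaemon | head -1)
    # VmRSS = total resident memory of the process (heap + off-heap + native)
    grep VmRSS /proc/$CASS_PID/status
    # anything well above -Xmx (16 GB here) is off-heap: memtables, bloom
    # filters, compression/index metadata, and other native allocations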
Below is the output of "nodetool info" and "nodetool compactionstats" for a
culprit table which causes the issue.
If possible, I would suggest running that command on a periodic basis (cron or
whatever).
Also, you can run it on a single server and iterate through all the nodes in
the cluster/DC.
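Something along these lines could work (a sketch only - the node list, log path, and schedule are placeholders, not from this thread):

    #!/bin/bash
    # check_cassandra_mem.sh - run from one box against every node in the DC.
    # NODES is a placeholder; substitute your own host list or derive it
    # from "nodetool status".
    NODES="node1 node2 node3"
    OUT=/var/log/cassandra_checks
    mkdir -p "$OUT"
    for H in $NODES; do
        echo "=== $H $(date) ===" >> "$OUT/$H.log"
        # -h needs JMX reachable from this box (default port 7199)
        nodetool -h "$H" info >> "$OUT/$H.log" 2>&1
    done

and a crontab entry such as:

    */15 * * * * /opt/scripts/check_cassandra_mem.sh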
Would also recommend running "nodetool compactionstats".
I also looked at your concern about the high value for hinted handoff.
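Extending the same loop (again just a sketch, reusing the NODES placeholder from above; the thread-pool names in tpstats vary by Cassandra version, so treat that part as an assumption on my side):

    for H in $NODES; do
        echo "--- $H ---"
        # pending/active compactions per node
        nodetool -h "$H" compactionstats
        # thread-pool stats include the hinted handoff / hints dispatch pools,
        # which is one way to watch hint activity building up
        nodetool -h "$H" tpstats
    done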
> On Mar 3, 2017, at 12:18 PM, Shravan Ch wrote:
>
> Hello,
>
> More than 30 Cassandra servers in the primary DC went down with the OOM exception
> below. What puzzles me is the scale at which it happened (all in the same
> minute). I will share some more details below.
>
> Sy
I was looking at nodetool info across all nodes. Consistently, the JVM heap used is
~12 GB and off-heap is ~4-5 GB.
From: Thakrar, Jayesh
Sent: Saturday, March 4, 2017 9:23:01 AM
To: Shravan C; Joaquin Casares; user@cassandra.apache.org
Subject: Re: OOM on Apache Cassandra
LCS does not rule out frequent updates - it just says that there will be more
frequent compaction, which can potentially increase compaction activity (which
again can be throttled as needed).
But STCS will guarantee OOM when you have large datasets.
Did you have a look at the off-heap + on-heap sizes?
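For reference, moving the offending table to LCS and capping compaction throughput would look roughly like this (the keyspace/table name and the 32 MB/s figure are placeholders, not from this thread):

    # switch the table to LeveledCompactionStrategy
    cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {'class': 'LeveledCompactionStrategy'};"
    # throttle compaction I/O at runtime, in MB/s (0 = unthrottled)
    nodetool setcompactionthroughput 32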