Hi Vignesh,
Correlation does not imply causation. I wouldn't work on the assumption
that the memory usage spikes are caused by compactions to start with.
It's best to prove the causal effect first. There are multiple ways to do
this; I'm just throwing out a few ideas:
1. taking a heap dump while a compaction is running and memory usage is elevated, then checking what is actually holding on to the memory
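For example, one low-effort way to test the correlation is to sample the process RSS next to nodetool compactionstats on a fixed interval and see whether the spikes actually line up with compaction activity. A rough sketch in Python, assuming a Linux node with nodetool on the PATH (the pid and interval are placeholders):

import re
import subprocess
import time
from pathlib import Path

CASSANDRA_PID = 12345        # placeholder: the Cassandra process id on this node
INTERVAL_SECONDS = 30

def rss_mb(pid):
    # Resident set size of the process, from /proc, in MB
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) / 1024
    return 0.0

def pending_compactions():
    # "pending tasks: N" is part of the nodetool compactionstats output
    out = subprocess.run(["nodetool", "compactionstats"],
                         capture_output=True, text=True).stdout
    m = re.search(r"pending tasks:\s*(\d+)", out)
    return m.group(1) if m else "?"

while True:
    print(f"{time.strftime('%H:%M:%S')}  rss={rss_mb(CASSANDRA_PID):.0f} MB  "
          f"pending compactions={pending_compactions()}")
    time.sleep(INTERVAL_SECONDS)

If the RSS peaks consistently coincide with pending compactions going up, that's at least some evidence for the causal link; if they don't, look elsewhere.
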
Can you explain a bit more what you mean by memory spikes?
The defaults we ship use the same settings for min and max JVM heap size,
so you should see all the memory allocated to the JVM at startup. Did you
change anything here? I don't recommend doing so.
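If you want to double-check what the running process was actually started with, you can read the heap flags straight off its command line. A small sketch, Linux only, with a placeholder pid:

import re
from pathlib import Path

def heap_flags(pid):
    # /proc/<pid>/cmdline is NUL-separated; pick out the -Xms/-Xmx arguments
    args = Path(f"/proc/{pid}/cmdline").read_bytes().split(b"\0")
    return [a.decode() for a in args if re.match(rb"-Xm[sx]", a)]

print(heap_flags(12345))   # placeholder pid; expect something like ['-Xms10G', '-Xmx10G']
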
If you're referring to files in the page cache, that memory is managed by the OS and is reclaimed when it's needed elsewhere.
*Setup:*
I have a Cassandra cluster running in 3 datacenters with 3 nodes each
(total 9 nodes), hosted on GCP.
• *Replication Factor:* 3-3-3
• *Compaction Strategy:* LeveledCompactionStrategy
• *Heap Memory:* 10 GB (Total allocated memory: 32 GB)
• *Off-heap Memory:* around 4 GB
• *Workload:* ~1.5K
It may help with some, but it's compaction and memtable flushes that
generate the most I/O. You also run the risk of not having data fully
committed to disk if something bad were to happen. That might be acceptable
for time-series data that you can afford to lose, but not for
mission-critical things.
Thanks, Bowen and Jon, for the clarifications and suggestions! I will go
through them and dig in further.
Yes, the JVM heap size is fixed and I can see it is allocated at all times.
The spikes I am referring to happen on top of the heap-allocated memory.
I had tuned the heap settings to resolve a GC pause issue.
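To narrow this down on my side, I am planning to log the off-heap portion next to the process RSS over time and see what actually grows during the spikes. A rough sketch of what I have in mind, assuming nodetool info on our version prints an "Off Heap Memory (MB)" line (the pid is a placeholder; the heap size is the 10 GB from the setup above):

import re
import subprocess
import time
from pathlib import Path

CASSANDRA_PID = 12345      # placeholder: the Cassandra process id
HEAP_MB = 10 * 1024        # fixed -Xms/-Xmx from our setup

def rss_mb(pid):
    # Resident set size of the process, from /proc, in MB
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) / 1024
    return 0.0

def off_heap_mb():
    # Cassandra's own off-heap accounting (memtables, bloom filters, index summaries, ...)
    out = subprocess.run(["nodetool", "info"], capture_output=True, text=True).stdout
    m = re.search(r"Off Heap Memory \(MB\)\s*:\s*([\d.]+)", out)
    return m.group(1) if m else "?"

while True:
    rss = rss_mb(CASSANDRA_PID)
    print(f"{time.strftime('%H:%M:%S')}  rss={rss:.0f} MB  "
          f"rss-heap={rss - HEAP_MB:.0f} MB  off-heap(nodetool)={off_heap_mb()}")
    time.sleep(60)

If the RSS growth is much larger than what nodetool attributes to off-heap structures, I'll start looking at native allocations outside Cassandra's own accounting.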