I took a jmap dump of Cassandra in production.  Before I restarted the 
whole production cluster, I had some nodes running compaction and it looked 
like all memory had been consumed (as if Cassandra is not clearing out the 
caches or memtables fast enough).  I am still trying to debug why compaction 
causes slowness on the cluster, since all cassandra.yaml files are pretty much 
the defaults with size-tiered compaction.

The weird thing is that I take the dump, get a 5.4G heap.bin file, and load 
that into Eclipse, which tells me the total is only 142.8MB.  What?  Why so 
low?  top was showing 1.9G at the time of the dump (the top snapshot below was 
taken about 2 hours later).  How is the Eclipse profiler telling me the jmap 
dump has 142.8MB in use instead of 1.9G in use?

Tasks: 398 total,   1 running, 397 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.8%us,  0.5%sy,  0.0%ni, 96.5%id,  0.1%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  32854680k total, 31910708k used,   943972k free,    89776k buffers
Swap: 33554424k total,    18288k used, 33536136k free, 23428596k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
20909 cassandr  20   0 64.1g 9.2g 2.1g S 75.7 29.4 182:37.92 java
22455 cassandr  20   0 15288 1340  824 R  3.9  0.0   0:00.02 top
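
In case the dump mode matters: as far as I understand, jmap (and the 
HotSpotDiagnosticMXBean it drives) can dump either only the live/reachable 
objects or everything on the heap including not-yet-collected garbage, and I 
honestly don't remember which mode I used.  Here is a minimal Java sketch just 
to show the flag I mean (the class name and file paths are made up):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumpSketch {
        public static void main(String[] args) throws Exception {
            // Same diagnostic MXBean that jmap uses for -dump.
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);

            // live = true: only objects still reachable after a GC go into
            // the file (roughly what "jmap -dump:live,..." does).
            bean.dumpHeap("/tmp/heap-live.bin", true);

            // live = false: the whole heap, including garbage that has not
            // been collected yet, which can make the file far bigger than
            // what an analyzer later counts as reachable.
            bean.dumpHeap("/tmp/heap-all.bin", false);
        }
    }

If my dump was the "everything" kind and the heap was full of compaction 
garbage waiting to be collected, I could imagine the analyzer only counting 
the small reachable portion, but that is just my guess.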

It almost seems like Cassandra is not managing memory well here: we slowly get 
into a situation where compaction runs and eats up our memory (the heap is 
configured for 8G).  I could easily go higher than 8G on these systems since 
each node has 32GB, but the docs say 8G is better for GC.  Has anyone else 
taken a jmap dump of Cassandra?
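
For what it's worth, if I am remembering the stock cassandra-env.sh heap 
calculation right, a 32GB box lands on an 8G max heap anyway.  This is roughly 
the arithmetic I think it does, transcribed as a little Java sketch of my own 
(not the actual script):

    public class HeapSizeSketch {
        // My rough transcription of what I believe cassandra-env.sh computes
        // by default: max(min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB)).
        static long defaultMaxHeapMb(long systemMemoryMb) {
            long half = Math.min(systemMemoryMb / 2, 1024);
            long quarter = Math.min(systemMemoryMb / 4, 8192);
            return Math.max(half, quarter);
        }

        public static void main(String[] args) {
            // With 32GB of RAM per node this works out to 8192MB, i.e. the
            // same 8G heap we are already running with.
            System.out.println(defaultMaxHeapMb(32 * 1024) + "MB");
        }
    }

So going above 8G would mean overriding MAX_HEAP_SIZE explicitly, and my 
understanding is that the 8G guidance is mainly about keeping GC pauses 
reasonable.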

Thanks,
Dean
