That is pretty clear evidence of a memory leak, a tombstone problem (still memory), or the like.
If this is Apache Cassandra, you may need to take some heap dumps and see what is going on (if it is the Java heap that is OOMing, which I suspect). You might also want to collect periodic vmstat output or equivalent (brute force might be a screen session logging it every few minutes).
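A minimal brute-force sampling sketch (the log path, the five-minute interval, and pointing ps/jmap at the Cassandra JVM are my assumptions, not something stated in this thread):

    # Append a memory snapshot every 5 minutes
    while true; do
        date >> /var/tmp/cassandra-mem.log
        vmstat >> /var/tmp/cassandra-mem.log
        ps -C java -o pid,rss,vsz,%mem,cmd >> /var/tmp/cassandra-mem.log
        sleep 300
    done

    # If the Java heap is what is OOMing, dump it for offline analysis
    # (replace <cassandra-pid> with the JVM pid from ps above)
    jmap -dump:live,format=b,file=/var/tmp/cassandra-heap.hprof <cassandra-pid>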
We had what sounds like a similar problem with a DSE cluster a little while ago. It was not being used and had no tables in it, but the memory kept rising until the process was killed by the oom-killer.
We spent a long time trying to get to the bottom of the problem, but it suddenly stopped when the develop
Thank you. But I haven't added any tables yet. It's empty...
On Tuesday, October 22, 2019, 1:15 AM, Matthias Pfau wrote:
Did you check nodetool status and logs? If so, what is reported?
Regarding the fact that more and more memory is being used: this might be a problem with your table design. I would start by analyzing the nodetool tablestats output; it reports how much memory (especially off-heap) is used by which table.
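For example (my_keyspace is just a placeholder; the metric names below are from a stock nodetool tablestats report):

    nodetool status
    nodetool tablestats my_keyspace

    # Per-table off-heap consumers to watch for in the tablestats output:
    #   Memtable off heap memory used
    #   Bloom filter off heap memory used
    #   Index summary off heap memory used
    #   Compression metadata off heap memory used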
Best,
Matthias
What are the minimum and recommended RAM and disk space requirements to run Cassandra in AWS?
About every 24 hours Cassandra stops working. Even though the service shows as active, it is dead and unresponsive until I restart it.
top shows %MEM slowly creeping upwards; yesterday it reached 75%.
In the l
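A quick way to check whether the kernel's oom-killer is what kills the process (assuming a systemd host and a service unit named cassandra; both are assumptions):

    # Kernel log: did the oom-killer terminate the JVM?
    dmesg -T | grep -i -E 'out of memory|killed process'

    # Service log for the last day (unit name assumed to be "cassandra")
    journalctl -u cassandra --since "24 hours ago" | grep -i -E 'oom|OutOfMemory'

    # Current resident memory of the Cassandra JVM
    ps -C java -o pid,rss,%mem,cmd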