Aaron,
Have reduced cache sizes and been monitoring for the past week. It appears as
if this was the culprit - since making the change I have not seen the issue
resurface.
For those keeping score at home:
* Had sudden persistent spikes in CPU from the Cassandra java process
* Occurred every 24-48 hours
Caches might be it. Will try reducing and see how it goes.
Didn't mention this earlier because I wasn't seeing the same errors yesterday,
but last night I thought it might be worth bringing up.
Had changed the location of some data last week, and a few days after that
I received some IOException errors
That's once in a few days, so I don't think it's too important. Especially
since 0.77 is much better than the 0.99 I've seen sometimes :)
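For context, that number is (roughly) the used/max ratio of the heap at the
time GCInspector runs. Here is a minimal in-process sketch of the same ratio,
just as an illustration (this is not Cassandra's own code):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapFullness {
    public static void main(String[] args) {
        // Fraction of the maximum heap currently in use: the same kind of
        // ratio the "Heap is 0.77... full" warning reports.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        double full = (double) heap.getUsed() / heap.getMax();
        System.out.println("Heap is " + full + " full");
    }
}

The default flush_largest_memtables_at is 0.75 (if I remember right), so
anything above that triggers the warning and the emergency flush, which is
why 0.99 is a lot scarier than 0.77.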
On 26.01.12 02:49, aaron morton wrote:
You are running into GC issues.
WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146)
Heap is 0.7767292149986439 full. You may need to reduce memtable and/or
cache sizes. Cassandra will now flush up to the two largest memtables to
free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml
if you don't want Cassandra to do this automatically
According to the log, I don't see much time spent on GC. You can still
check it with jstat or uncomment the GC logging lines in cassandra-env.sh. Are
you sure you've identified the thread correctly?
It's still possible that you have a memory spike where GCInspector simply
has no chance to run between Full GCs.
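If jstat isn't convenient, a quick sketch like the one below (assuming Java 7+
and Cassandra's default JMX port, 7199 on localhost; the class name is just
illustrative) reads the same cumulative GC counters over JMX:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GcTimes {
    public static void main(String[] args) throws Exception {
        // Cassandra exposes JMX on port 7199 by default (host/port assumed here).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = connector.getMBeanServerConnection();

        // Cumulative collection counts and times, per collector (e.g. ParNew, CMS).
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getPlatformMXBeans(mbs, GarbageCollectorMXBean.class);
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        connector.close();
    }
}

Sample it twice a few seconds apart; if the totals barely move while the CPU
is pegged, GC is probably not what that thread is doing.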
Here is a snippet of what I'm getting out of system.log for GC. Does anything
in there provide a clue?
WARN [ScheduledTasks:1] 2012-01-22 12:53:42,804 GCInspector.java (line 146)
Heap is 0.7767292149986439 full. You may need to reduce memtable and/or cache
sizes. Cassandra will now flush up to the two largest memtables to free up
memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you
don't want Cassandra to do this automatically
Hello.
What's in the logs? It should output something like "Hey, you've got
most of your memory used. I am going to flush some of the memtables". Sorry,
I don't remember the exact wording, but it comes from GC, so it should be
greppable by "GC".
On 25.01.12 16:26, Matthew Trinneer wrote:
Hello Community,
Am troubleshooting an issue with sudden and sustained high CPU on nodes in a
3-node cluster. This takes place when there is minimal load on the servers, and
continues indefinitely until I stop and restart a node. All nodes (3) seem to
be affected by the same issue, however it