As I alluded to in another post, we just moved from 2 to 4 nodes. Since then,
the cluster has been incredibly unstable.

The memory problems I've posted about before have gotten much worse, and our
nodes become incredibly slow or unusable every 24 hours or so. The JVM
reports that only 14 GB is committed, but the RSS of the process is 22 GB.
When this happens, Cassandra is completely unresponsive but still has
requests routed to it internally, so it completely destroys performance.

I'm at a loss for how to diagnose this issue.
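One way to start quantifying the problem is to compare what the JVM thinks it has committed against what the OS actually charges the process. The gap usually lives outside the heap (native allocations, mmap'd SSTables, thread stacks). A minimal sketch, using the figures from this post as placeholders; in practice the numbers would come from something like `jcmd <pid> VM.native_memory summary` on the JVM side and `ps -o rss= -p <pid>` on the OS side:

```shell
#!/bin/sh
# Sketch: measure the gap between JVM-committed memory and process RSS.
# These values are the ones reported above; substitute live measurements
# (jcmd / ps) when diagnosing a real node.
committed_gb=14   # memory the JVM reports as committed
rss_gb=22         # resident set size the OS reports for the process
gap_gb=$((rss_gb - committed_gb))
echo "memory outside the JVM's accounting: ${gap_gb} GB"
```

If that gap is mostly mmap'd data files, it is page cache charged to the process and is usually harmless; if it is anonymous native memory, that points at a genuine off-heap leak.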

In addition to that, read performance has gone way downhill, and query
latency is much higher than it was with the 2-node cluster. Perhaps this was
to be expected, though.

We really like Cassandra for the most part, but these stability issues are
going to force us to abandon it. Our application is like a yo-yo right now,
and we can't live with that.

Help resolving these issues would be greatly appreciated.
