Hey guys,

I am getting an out-of-memory (mmap failed) error with Cassandra
1.0.2. The relevant log lines are pasted at
http://pastebin.com/UM28ZC1g.

Cassandra works fine until it reaches about 300-400 GB of load per
instance (I have 12 nodes with RF=2); then nodes start failing with
this error. The nodes are fairly beefy: 32 GB of RAM, 8 cores.
Increasing the JVM heap size does not help.
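Since the failure is in mmap rather than on the heap, one thing I can compare is the number of live memory mappings against the kernel's per-process ceiling. A quick check looks like this (using the shell's own PID as a stand-in for the Cassandra PID; the sysctl path is standard Linux):

```shell
# Current mapping count for a process (substitute the Cassandra PID for "self").
wc -l < /proc/self/maps

# Kernel per-process ceiling on mappings (Linux default is 65530).
cat /proc/sys/vm/max_map_count
```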

I am running a 64-bit JVM and using JNA. Memlock is unlimited for the
user (confirmed by looking at /proc/<pid>/limits).
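For reference, the memlock check amounts to this (shown against the shell's own limits file as a stand-in; substitute the Cassandra PID):

```shell
# Show the locked-memory limit for a running process; both the soft and
# hard limit should read "unlimited" for mlockall/JNA to work.
grep "Max locked memory" /proc/self/limits
```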

I also tried restarting the process as root, but it crashes with the same error.

Also, the data directory contains only about 300 files, so I should
not be exceeding the open-files limit.
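The file count above comes from a simple count of the data directory (the path below is the Cassandra default; adjust it to your data_file_directories setting):

```shell
# Count files under the Cassandra data directory as a rough sanity check
# against the open-files limit. The path is the stock default.
DATA_DIR=/var/lib/cassandra/data
find "$DATA_DIR" -type f 2>/dev/null | wc -l
```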

I don't know if this is relevant, but I have just two column families,
counter_object and counter_time. I am using very wide rows (many
columns per row), so row sizes can be huge. As you can see from the
pasted log, the *.db files are sometimes quite big.

Please help! Thank you!

-- 
Regards,
Ajeet
