> My rows consist of only 60 columns and these 60 columns looks like this:
> ColumnName: Sensor59 -- Value: 434.2647915698039 -- TTL: 10800
The hotspot error log indicates the OOM is actually the result of a *stack* overflow rather than a heap overflow. While the first OOM in system.log points to compaction, the stack frame blamed in the hotspot error log is different and is actually in the read path.

I'm pretty surprised, since I don't believe there is anything extreme going on in terms of Java-level stack depth, and it seems unexpected to me that any of the native I/O code would be doing unbounded stack allocation. The VM args don't contain a custom -Xss according to the hotspot error log.

I would say "try a newer JVM", except you seem to be on the latest 1.6 update. In a recent OpenJDK 7, RandomAccessFile.readBytes() ends up in readBytes() in share/native/java/io/io_util.c, which only uses stack allocation for reads <= 8192 bytes. I didn't check earlier JDKs, but it seems highly unlikely that such a core feature would do unbounded stack allocation and have it go unnoticed. Nor does it seem likely that the default stack size on Windows is so small as to make this an expected outcome given the stack depth in Cassandra.

I wonder if there is memory corruption going on that causes the overflow. Or am I missing something simple?

-- 
/ Peter Schuller
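P.S. For reference, the bounded-stack-allocation pattern I'm describing looks roughly like this. This is a simplified sketch of the idiom, not the actual OpenJDK source; the names (copy_bytes, used_heap) are mine:

```c
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE 8192

/* Sketch of the io_util.c-style idiom: small transfers borrow a
 * fixed-size stack buffer, large transfers fall back to malloc,
 * so stack usage is bounded at BUF_SIZE regardless of len.
 * Returns 1 if the heap was used, 0 if the stack buffer was used,
 * -1 on allocation failure. */
static int copy_bytes(const char *src, char *dst, size_t len)
{
    char stack_buf[BUF_SIZE];
    char *buf;
    int used_heap = 0;

    if (len > BUF_SIZE) {
        buf = malloc(len);          /* large read: heap, not stack */
        if (buf == NULL)
            return -1;
        used_heap = 1;
    } else {
        buf = stack_buf;            /* small read: bounded stack usage */
    }

    memcpy(buf, src, len);          /* stands in for the native read() */
    memcpy(dst, buf, len);

    if (used_heap)
        free(buf);
    return used_heap;
}
```

The point is that no call path through this function ever puts more than BUF_SIZE bytes of buffer on the stack, which is why an unbounded stack allocation here would surprise me.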