I'm running a 4-node cluster with RF=3, writing at CL QUORUM and reading at CL ONE. Each node has 3.7GB of RAM and a 32GB SSD for data; the commitlog is on a separate disk. Each node currently holds about 12GB of data. The cluster looks healthy except during repair, when OpsCenter shows some nodes dropping to medium health.

MAX_HEAP_SIZE="2G"
HEAP_NEWSIZE="400M"

I've looked everywhere for information on what might be causing these errors, but no luck. Can anyone point me to what I should look at or tune to get past them?

All column families use SizeTieredCompactionStrategy. I've thought about moving to LeveledCompactionStrategy since Cassandra is running on SSDs, but I haven't made the move yet.
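If I do switch, my understanding is it would be something along these lines per table (keyspace/table names below are placeholders, not my real schema):

-- Hypothetical keyspace/table; sstable_size_in_mb shown at its common default
ALTER TABLE my_keyspace.my_wide_table
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 160 };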

All writes are write-once: data is rarely updated and there are no TTL'd columns. I do use wide rows that can span anywhere from a few thousand to a few million columns, and I'm not sure whether range slices over them are held entirely in memory.
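If the range scans are the problem, I assume I could bound them with token-based paging along these lines (again, table and column names here are made up, just to show the kind of bounded scan I mean):

-- Page through the token range so each request only pulls a limited
-- slice into the heap, instead of one unbounded scan
SELECT key, column1, value
  FROM my_keyspace.my_wide_table
 WHERE token(key) > token('last_key_from_previous_page')
 LIMIT 1000;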

Let me know if further info is needed. I do have hprof heap dumps, but those are about 3.2 GB in size.

java.lang.OutOfMemoryError: Java heap space
org.apache.cassandra.io.util.RandomAccessReader.<init>
org.apache.cassandra.io.util.RandomAccessReader.open
org.apache.cassandra.io.sstable.SSTableReader
org.apache.cassandra.io.sstable.SSTableScanner
org.apache.cassandra.io.sstable.SSTableReader
org.apache.cassandra.db.RowIteratorFactory.getIterator
org.apache.cassandra.db.ColumnFamilyStore.getSequentialIterator
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice
org.apache.cassandra.db.RangeSliceCommand.executeLocally
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run
java.util.concurrent.Executors$RunnableAdapter.call
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService
org.apache.cassandra.concurrent.SEPWorker.run
java.lang.Thread.run
