Hello,

Could you please share your experience with pushing through a major compaction on 
a CF with a large number of sstables? I get an OOM even after dropping the CFs 
that I can drop and increasing the JVM heap to its limit. My caches are minimal 
and my memtables are empty. This only happens on a single node.

Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:123)
        at org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:57)
        at org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:660)
        at org.apache.cassandra.db.compaction.CompactionIterator.getCollatingIterator(CompactionIterator.java:92)
        at org.apache.cassandra.db.compaction.CompactionIterator.<init>(CompactionIterator.java:68)
        at org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:552)
        at org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:506)
        at org.apache.cassandra.db.compaction.CompactionManager$4.call(CompactionManager.java:319)
        at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
        at java.util.concurrent.FutureTask.run(Unknown Source)
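My reading of the trace (an assumption on my part, not confirmed): the OOM is raised while constructing a BufferedRandomAccessFile per SSTableScanner, and a major compaction opens a scanner for every sstable at once, so the heap needed for scanner buffers scales with the sstable count. A rough back-of-the-envelope sketch; both the sstable count and the per-scanner buffer size below are hypothetical placeholders, not values from my cluster:

```java
// Rough heap estimate for a major compaction, assuming (unverified) that each
// SSTableScanner's BufferedRandomAccessFile holds one in-heap buffer for the
// duration of the compaction.
public class CompactionHeapEstimate {

    // Total bytes of scanner buffers resident at once: one buffer per sstable.
    static long estimateBytes(int sstableCount, long bufferBytesPerScanner) {
        return sstableCount * bufferBytesPerScanner;
    }

    public static void main(String[] args) {
        int sstables = 20_000;        // hypothetical CF with many sstables
        long buffer = 256 * 1024;     // assumed 256 KiB buffer per scanner
        long total = estimateBytes(sstables, buffer);
        System.out.println(total / (1024 * 1024) + " MiB just for scanner buffers");
    }
}
```

With numbers like these the buffers alone would need about 5 GiB of heap, which would explain why raising -Xmx only postpones the failure instead of fixing it.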

What other options do I have, e.g. reloading this CF, or just this node's segment 
of it, from the rest of the ring? Ideally I would like to avoid custom 
export/import scripting.

Thank you very much,
Oleg
