Hi all,

I got an OutOfMemoryError during the startup process, as shown in the logs below.
I have three questions about the error.

1. Why does the Cassandra I built myself throw OutOfMemory errors?
The OutOfMemory errors happened during the startup process on some (not all)
nodes running Cassandra 2.2.8 that I checked out from GitHub and built myself.
However, the error didn't happen on Cassandra 2.2.8 installed via yum.
Is there any difference between the GitHub build and the yum package?

2. Why can Cassandra continue the startup process when an OutOfMemory error
happens while initializing system.hints?
Is that because a failure to load the index summary isn't fatal?
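
My rough mental model is sketched below as a toy in Java: the summary is only
a cache of the primary index, so a failed load can be reported as "missing"
and the summary rebuilt from the index. All names in the sketch are mine, not
Cassandra's, and I haven't verified this is what SSTableReader actually does
(in particular, an OutOfMemoryError is an Error, not an IOException, so it may
not be handled this way at all).

== sketch: load-or-rebuild fallback (my guess, toy code) ==
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SummaryFallback
{
    // Try the cached summary file; report failure instead of throwing.
    static byte[] loadSummary(Path summaryFile)
    {
        try
        {
            return Files.readAllBytes(summaryFile);
        }
        catch (IOException e)
        {
            return null; // caller treats this as "summary missing"
        }
    }

    public static void main(String[] args) throws IOException
    {
        byte[] summary = loadSummary(Paths.get(args[0]));
        if (summary == null)
            // Rebuild from the authoritative index file, so startup
            // can continue even though the cached summary was unusable.
            summary = Files.readAllBytes(Paths.get(args[1]));
        System.out.println("summary bytes: " + summary.length);
    }
}
====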

3. Does this error cause a consistency problem, such as data appearing to
roll back?
In my test, some updates were lost after the error happened (i.e. stale data
were read).
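
To rule out consistency-level effects on my side, I plan to rerun the test
with both reads and writes pinned to QUORUM, roughly like the snippet below
(DataStax Java driver; the keyspace/table ks.tbl and the values are
placeholders, not my real schema):

== quorum read/write check (placeholder schema) ==
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class QuorumCheck
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // R + W > N on a 3-node cluster, so a quorum read should
            // see the latest quorum write even with one node down.
            SimpleStatement write =
                new SimpleStatement("UPDATE ks.tbl SET v = 1 WHERE k = 0");
            write.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(write);

            SimpleStatement read =
                new SimpleStatement("SELECT v FROM ks.tbl WHERE k = 0");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            System.out.println(session.execute(read).one().getInt("v"));
        }
    }
}
====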

My cluster has 3 nodes; each node is an AWS EC2 m4.large (2 cores, 8 GB
memory) running Amazon Linux.
I ran a test that issued a lot of updates while the Cassandra nodes were
killed and restarted one at a time.

== logs ==
INFO  [main] 2016-12-10 09:46:50,204 ColumnFamilyStore.java:389 - Initializing system.hints
ERROR [SSTableBatchOpen:1] 2016-12-10 09:46:50,359 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.deserializeBounds(MmappedSegmentedFile.java:411) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader.loadSummary(SSTableReader.java:850) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:700) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:672) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:466) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:371) ~[main/:na]
        at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:509) ~[main/:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
====

== JVM arguments ==
INFO  [main] 2016-12-10 09:42:53,798 CassandraDaemon.java:417 - JVM
Arguments: [-ea, -javaagent:/home/ec2-user/cassandra/bin/../lib/jamm-0.3.0.jar,
-XX:+CMSClassUnloadingEnabled, -XX:+UseThreadPriorities,
-XX:ThreadPriorityPolicy=42, -Xms1996M, -Xmx1996M, -Xmn200M,
-XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003,
-XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled,
-XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1,
-XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly,
-XX:+UseTLAB, -XX:+PerfDisableSharedMem,
-XX:CompileCommandFile=/home/ec2-user/cassandra/bin/../conf/hotspot_compiler,
-XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled,
-XX:+CMSEdenChunksRecordAlways, -XX:CMSWaitDuration=10000,
-XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC,
-XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime,
-XX:+PrintPromotionFailure,
-Xloggc:/home/ec2-user/cassandra/bin/../logs/gc.log,
-XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M,
-Djava.net.preferIPv4Stack=true, -Dcassandra.jmx.local.port=7199,
-XX:+DisableExplicitGC,
-Djava.library.path=/home/ec2-user/cassandra/bin/../lib/sigar-bin,
-Dlogback.configurationFile=logback.xml,
-Dcassandra.logdir=/home/ec2-user/cassandra/bin/../logs,
-Dcassandra.storagedir=/home/ec2-user/cassandra/bin/../data]
====
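
As an aside, the heap is only ~2 GB (-Xms1996M/-Xmx1996M), which looks like
the default cassandra-env.sh auto-calculation for an 8 GB box. In case it is
relevant, I may retry with an explicit, larger heap; the values below are
just a guess, not a recommendation:

== heap override I may try (conf/cassandra-env.sh) ==
# Overrides the auto-calculated heap; sizes are guesses for an 8 GB host.
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
====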

thanks,
Yuji
