How can I restart?
It blocks with the error listed below.
Are my memory settings good for my configuration?

On 14 Jan 2016, at 18:30, Jake Luciani <jak...@gmail.com> wrote:

Yes you can restart without data loss.
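A minimal sketch of one common way past a corrupt commit-log segment, for the record: stop the node, back up the commit log, and move the segment the replay error names out of the way. The paths below are the usual package-install defaults and the segment name is a placeholder, both assumptions on my part; note that any mutations in the skipped segment are not replayed, so keep the backup.

```shell
# Illustrative recovery steps -- paths are package-install defaults, adjust to your layout.
sudo service cassandra stop
# Keep a copy of everything before touching it.
cp -a /var/lib/cassandra/commitlog /var/lib/cassandra/commitlog.bak
# The replay error identifies the segment it choked on; move that file aside.
# (CommitLog-<id>.log is a placeholder -- use the actual filename from the log.)
# mv /var/lib/cassandra/commitlog/CommitLog-<id>.log /tmp/
sudo service cassandra start
```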

Can you please include info about how much data you have loaded per node and 
perhaps what your schema looks like?

Thanks

On Thu, Jan 14, 2016 at 12:24 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:

Ok, I will open a ticket.

How could I restart my cluster without losing everything?
Would there be a better memory configuration for my nodes? Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" on a 16 GB RAM node.

Thanks

Jean

On 14 Jan 2016, at 18:19, Tyler Hobbs <ty...@datastax.com> wrote:

I don't think that's a known issue.  Can you open a ticket at 
https://issues.apache.org/jira/browse/CASSANDRA and attach your schema along 
with the commitlog files and the mutation that was saved to /tmp?
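A quick sketch of gathering the artifacts requested above, in case it helps; the data-directory path is the common package-install default, which is an assumption, not something stated in this thread.

```shell
# Collect schema, commit-log segments, and the saved mutation for the JIRA ticket.
# /var/lib/cassandra is the package-install default path -- adjust to your layout.
cqlsh -e "DESCRIBE SCHEMA" > schema.cql
tar czf commitlog.tar.gz /var/lib/cassandra/commitlog
cp /tmp/mutation7465380878750576105dat .
```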

On Thu, Jan 14, 2016 at 10:56 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,

I have a small Cassandra cluster with 5 nodes, each having 16 GB of RAM.
I use Cassandra 3.1.1.
I use the following setup for the memory:
  MAX_HEAP_SIZE="6G"
  HEAP_NEWSIZE="496M"
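For reference, a sketch of the conventional heap-sizing guidance for cassandra-env.sh (heap roughly min(1/4 of system RAM, 8 GB); new generation roughly 100 MB per CPU core). The concrete values below are illustrative assumptions for a 16 GB, 8-core box, not settings taken from this thread:

```shell
# cassandra-env.sh fragment -- illustrative values, adjust for your hardware.
# Conventional rule of thumb: MAX_HEAP_SIZE = min(1/4 of system RAM, 8 GB).
MAX_HEAP_SIZE="4G"
# Conventional rule of thumb: HEAP_NEWSIZE ~= 100 MB per CPU core (here: 8 cores).
HEAP_NEWSIZE="800M"
```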

I have been loading a lot of data into this cluster over the last 24 hours. The system behaved, I think, very nicely: it was loading very fast and giving excellent read times. There were no error messages until this one:


ERROR [SharedPool-Worker-35] 2016-01-14 17:05:23,602 JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_65]
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_65]
    at org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:297) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
    at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.1.1.jar:3.1.1]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]

Four nodes out of five crashed with this error message. Now when I want to restart the first node I get the following error:

ERROR [main] 2016-01-14 17:15:59,617 JVMStabilityInspector.java:81 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Unexpected error deserializing mutation; saved to /tmp/mutation7465380878750576105dat.  This may be caused by replaying a mutation against a table with the same name but incompatible schema.  Exception follows: org.apache.cassandra.serializers.MarshalException: Not enough bytes to read a map
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:633) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:556) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:509) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:404) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:151) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:189) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:169) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:549) [apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:677) [apache-cassandra-3.1.1.jar:3.1.1]

I can no longer start my nodes.

How can I restart my cluster?
Is this problem known?
Is there a Cassandra 3 version that behaves better with respect to this problem?
Would there be a better memory configuration for my nodes? Currently I use MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="496M" on a 16 GB RAM node.


Thank you very much for your advice.

Kind regards

Jean



--
Tyler Hobbs
DataStax <http://datastax.com/>



--
http://twitter.com/tjake
