Hi

I am using Apache Cassandra version:

[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]


I am running a 5-node cluster and recently added one node to it.
The cluster runs with the G1 garbage collector and a 16 GB heap (-Xmx).
The cluster also has one materialized view.
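
For reference, the heap and GC settings in jvm.options look roughly like the snippet below. Only -Xmx16G and the use of G1 are certain; the remaining lines are the stock G1 section from the 3.11 jvm.options and may not exactly match my values.

  # jvm.options (G1 section) -- approximate
  -Xms16G
  -Xmx16G
  -XX:+UseG1GC
  -XX:G1RSetUpdatingPauseTimePercent=5
  -XX:MaxGCPauseMillis=500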

On the newly added node I got an OutOfMemoryError.

Heap dump analysis shows the error below:

BatchlogTasks:1
  at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:48)
  at java.util.HashMap.resize()[Ljava/util/HashMap$Node; (HashMap.java:704)
  at java.util.HashMap.putVal(ILjava/lang/Object;Ljava/lang/Object;ZZ)Ljava/lang/Object; (HashMap.java:663)
  at java.util.HashMap.put(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; (HashMap.java:612)
  at java.util.HashSet.add(Ljava/lang/Object;)Z (HashSet.java:220)
  at org.apache.cassandra.batchlog.BatchlogManager.finishAndClearBatches(Ljava/util/ArrayList;Ljava/util/Set;Ljava/util/Set;)V (BatchlogManager.java:281)
  at org.apache.cassandra.batchlog.BatchlogManager.processBatchlogEntries(Lorg/apache/cassandra/cql3/UntypedResultSet;ILcom/google/common/util/concurrent/RateLimiter;)V (BatchlogManager.java:261)
  at org.apache.cassandra.batchlog.BatchlogManager.replayFailedBatches()V (BatchlogManager.java:210)
  at org.apache.cassandra.batchlog.BatchlogManager$$Lambda$269.run()V (Unknown Source)
  at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run()V (DebuggableScheduledThreadPoolExecutor.java:118)
  at java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object; (Executors.java:511)
  at java.util.concurrent.FutureTask.runAndReset()Z (FutureTask.java:308)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Ljava/util/concurrent/ScheduledThreadPoolExecutor$ScheduledFutureTask;)Z (ScheduledThreadPoolExecutor.java:180)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run()V (ScheduledThreadPoolExecutor.java:294)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V (ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run()V (ThreadPoolExecutor.java:624)
  at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(Ljava/lang/Runnable;)V (NamedThreadFactory.java:81)
  at org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$4.run()V (Unknown Source)
  at java.lang.Thread.run()V (Thread.java:748)


I have found that the system.batches table holds a huge amount of data on this node.

nodetool -u cassandra -pw cassandra tablestats system.batches -H
Total number of tables: 65
----------------
Keyspace : system
        Read Count: 3990928
        Read Latency: 0.07400208372589032 ms
        Write Count: 4898771
        Write Latency: 0.012194797838069997 ms
        Pending Flushes: 0
                Table: batches
                SSTable count: 5
                Space used (live): 50.89 GiB
                Space used (total): 50.89 GiB
                Space used by snapshots (total): 0 bytes
                Off heap memory used (total): 1.05 GiB
                SSTable Compression Ratio: 0.38778672943000886
                Number of partitions (estimate): 727971046
                Memtable cell count: 12
                Memtable data size: 918 bytes
                Memtable off heap memory used: 0 bytes
                Memtable switch count: 10
                Local read count: 0
                Local read latency: NaN ms
                Local write count: 618894
                Local write latency: 0.010 ms
                Pending flushes: 0
                Percent repaired: 0.0
                Bloom filter false positives: 0
                Bloom filter false ratio: 0.00000
                Bloom filter space used: 906.25 MiB
                Bloom filter off heap memory used: 906.25 MiB
                Index summary off heap memory used: 155.86 MiB
                Compression metadata off heap memory used: 10.6 MiB
                Compacted partition minimum bytes: 30
                Compacted partition maximum bytes: 258
                Compacted partition mean bytes: 136
                Average live cells per slice (last five minutes): 149.0
                Maximum live cells per slice (last five minutes): 149
                Average tombstones per slice (last five minutes): 1.0
                Maximum tombstones per slice (last five minutes): 1
                Dropped Mutations: 0 bytes
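
To see where the space is going on disk, I also checked the batchlog SSTable files directly. The commands below are roughly what I ran; the /var/lib/cassandra path is just the default data directory on my nodes, so adjust it for your layout.

  # size and listing of the system.batches SSTables (default data dir assumed)
  du -sh /var/lib/cassandra/data/system/batches-*/
  ls -lh /var/lib/cassandra/data/system/batches-*/ | head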


*Can someone please help? What could the issue be?*

-- 
Raman Gugnani
