Hello All,

We are facing an issue where a few of our nodes are unable to complete
compactions. We have tried restarting, scrubbing, and even rebuilding an
entire node, but nothing has worked so far.
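For reference, the remediation steps we tried look roughly like the
following (keyspace, table, and datacenter names are placeholders, and the
commands assume a Cassandra 4.x node managed by systemd):

```shell
# 1. Clean restart of the affected node.
nodetool drain
sudo systemctl restart cassandra

# 2. Online scrub of the affected table.
nodetool scrub <keyspace> <table>

# 3. Offline scrub with the node stopped, as a stronger variant.
sstablescrub <keyspace> <table>

# 4. Rebuild the node by streaming from another datacenter
#    (data directories wiped first, per the DataStax article).
nodetool rebuild -- <source_dc>
```

The error reappears during compaction regardless of which of these we run.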

It's a 10-region installation with close to 150 nodes.

DataStax Support
<https://support.datastax.com/s/article/ERROR-Failure-serializing-partition-key>
suggested rebuilding the node, but that did not help. Any help is
appreciated.

The full stack trace is below:

ERROR [CompactionExecutor:50] 2023-01-14 05:12:20,795 CassandraDaemon.java:581 - Exception in thread Thread[CompactionExecutor:50,1,main]
org.apache.cassandra.db.rows.PartitionSerializationException: Failed to serialize partition key '<key>' on table '<table>' in keyspace '<keyspace>'.
    at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:240)
    at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
    at org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:84)
    at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:137)
    at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:193)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:77)
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:100)
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:298)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.BufferOverflowException: null
    at org.apache.cassandra.io.util.DataOutputBuffer.validateReallocation(DataOutputBuffer.java:136)
    at org.apache.cassandra.io.util.DataOutputBuffer.calculateNewSize(DataOutputBuffer.java:154)
    at org.apache.cassandra.io.util.DataOutputBuffer.expandToFit(DataOutputBuffer.java:161)
    at org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:121)
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:121)
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:93)
    at org.apache.cassandra.db.marshal.ByteArrayAccessor.write(ByteArrayAccessor.java:61)
    at org.apache.cassandra.db.marshal.ByteArrayAccessor.write(ByteArrayAccessor.java:38)
    at org.apache.cassandra.db.marshal.ValueAccessor.writeWithVIntLength(ValueAccessor.java:164)
    at org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:451)
    at org.apache.cassandra.db.ClusteringPrefix$Serializer.serializeValuesWithoutSize(ClusteringPrefix.java:397)
    at org.apache.cassandra.db.Clustering$Serializer.serialize(Clustering.java:132)
    at org.apache.cassandra.db.ClusteringPrefix$Serializer.serialize(ClusteringPrefix.java:339)
    at org.apache.cassandra.io.sstable.IndexInfo$Serializer.serialize(IndexInfo.java:110)
    at org.apache.cassandra.io.sstable.IndexInfo$Serializer.serialize(IndexInfo.java:91)
    at org.apache.cassandra.db.ColumnIndex.addIndexBlock(ColumnIndex.java:223)
    at org.apache.cassandra.db.ColumnIndex.add(ColumnIndex.java:271)
    at org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:118)
    at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:216)
    ... 14 common frames omitted

Thanks
vaibhav
