[
https://issues.apache.org/jira/browse/CASSANDRA-19784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17922048#comment-17922048
]
Stefan Miklosovic commented on CASSANDRA-19784:
-----------------------------------------------
This is most probably a variation of CASSANDRA-20147, or a duplicate of
that issue.
> Commitlog leak leads to multi-node outage
> -----------------------------------------
>
> Key: CASSANDRA-19784
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19784
> Project: Apache Cassandra
> Issue Type: Bug
> Components: Local/Commit Log
> Reporter: Dan Sarisky
> Priority: Normal
> Fix For: 4.1.x, 5.0.x
>
>
> After days of sustained write traffic, our nodes get into a state where
> hundreds of the commitlog errors below occur per second. The node
> becomes essentially unwritable. This happens to multiple nodes at a time,
> effectively making the entire cluster unusable.
> java.lang.Error: Maximum permit count exceeded
>   at java.base/java.util.concurrent.Semaphore$Sync.tryReleaseShared(Semaphore.java:198)
>   at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1382)
>   at java.base/java.util.concurrent.Semaphore.release(Semaphore.java:619)
>   at org.apache.cassandra.db.commitlog.AbstractCommitLogService.requestExtraSync(AbstractCommitLogService.java:297)
>   at org.apache.cassandra.db.commitlog.BatchCommitLogService.maybeWaitForSync(BatchCommitLogService.java:40)
>   at org.apache.cassandra.db.commitlog.AbstractCommitLogService.finishWriteFor(AbstractCommitLogService.java:284)
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:330)
>   at org.apache.cassandra.db.CassandraKeyspaceWriteHandler.addToCommitLog(CassandraKeyspaceWriteHandler.java:100)
>   at org.apache.cassandra.db.CassandraKeyspaceWriteHandler.beginWrite(CassandraKeyspaceWriteHandler.java:54)
>   at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:641)
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:525)
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:228)
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:248)
>   at org.apache.cassandra.service.StorageProxy$4.runMayThrow(StorageProxy.java:1652)
>   at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2611)
>   at org.apache.cassandra.concurrent.ExecutionFailure$2.run(ExecutionFailure.java:163)
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:142)
>   at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> This happens on large 80-core machines with the following yaml settings:
> commitlog_sync: batch
> concurrent_writes: 640
> native_transport_max_threads: 640
> This happens on 4.1.5 and 5.0beta1. I have not tested other branches.
> I am NOT able to reproduce it on smaller 12-core machines.
> I am NOT able to reproduce it with commitlog_sync: periodic.
> I added some small instrumentation to periodically print out the value of
> org.apache.cassandra.db.commitlog.AbstractCommitLogService.haveWork.permits().
> Under sustained write traffic, this value grows continually (by roughly a
> million per minute under my cassandra-stress workload) until it hits
> Integer.MAX_VALUE and the error starts occurring. If write traffic slows,
> the value of haveWork.permits() drops.
>
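The reported behavior can be illustrated with a minimal single-threaded sketch. This is not Cassandra code, and the counts are made up: it only models the hypothesis that writers release one permit per write via requestExtraSync() while each sync cycle consumes at most one, so permits accumulate until java.util.concurrent.Semaphore (the class in the stack trace above) refuses a further release.

```java
import java.util.concurrent.Semaphore;

public class PermitLeakSketch {
    public static void main(String[] args) {
        // Hypothetical model: many writes signal the sync thread, but each
        // sync-thread wakeup drains at most one permit.
        Semaphore haveWork = new Semaphore(0);
        int writes = 1000;  // finishWriteFor() calls in one interval
        int syncs = 10;     // sync-thread wakeups in the same interval
        for (int i = 0; i < writes; i++) haveWork.release();
        for (int i = 0; i < syncs; i++) haveWork.tryAcquire();
        // Surplus permits accumulate instead of being drained.
        System.out.println(haveWork.availablePermits()); // 990

        // Once the count reaches Integer.MAX_VALUE, one more release()
        // throws the java.lang.Error seen in the stack trace above.
        Semaphore saturated = new Semaphore(Integer.MAX_VALUE);
        try {
            saturated.release();
        } catch (Error e) {
            System.out.println(e.getMessage()); // Maximum permit count exceeded
        }
    }
}
```

This matches the reporter's observation: permits grow in proportion to (writes - syncs) per interval, and the Error appears only after the count overflows, i.e. after days of sustained traffic.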
--
This message was sent by Atlassian Jira
(v8.20.10#820010)