ActiveMQ 5.13.4 with mKahaDB storage.
We have seen this several times: ActiveMQ runs fine for some days, then from a certain point the thread count starts increasing even though no additional clients are connecting (a rough way we watch this is sketched after the log excerpt below). In the meantime the log gets flooded with "taking a long time to shutdown" messages like these:
2016-09-08 17:44:23,544 | INFO  | The connection to 'tcp://123.249.8.90:31850' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 464480
2016-09-08 17:44:23,546 | INFO  | The connection to 'tcp://123.249.8.67:57283' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 474168
2016-09-08 17:44:23,547 | INFO  | The connection to 'tcp://103.61.136.57:37710' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 466022
2016-09-08 17:44:23,549 | INFO  | The connection to 'tcp://123.249.8.87:45330' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///123.249.8.87:49201@61616
2016-09-08 17:44:23,551 | INFO  | The connection to 'tcp://123.249.8.14:37888' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///123.249.8.14:41716@61616
2016-09-08 17:44:24,106 | INFO  | The connection to 'tcp://123.249.8.16:50328' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///123.249.8.16:48403@61616
2016-09-08 17:44:24,107 | INFO  | The connection to 'tcp://123.249.8.14:48463' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 460462
2016-09-08 17:44:24,109 | INFO  | The connection to 'tcp://123.249.8.35:45239' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///123.249.8.35:46988@61616
2016-09-08 17:44:24,119 | INFO  | The connection to 'tcp://27.50.133.155:33228' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 461267
2016-09-08 17:44:24,139 | INFO  | The connection to 'tcp://27.50.130.80:52197' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 467634
2016-09-08 17:44:24,145 | INFO  | The connection to 'tcp://221.235.189.202:34412' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 472760
2016-09-08 17:44:24,434 | INFO  | The connection to 'tcp://123.249.8.61:44765' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ NIO Worker 468617
2016-09-08 17:44:24,598 | INFO  | The connection to 'tcp://123.249.8.60:55858' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///123.249.8.60:38970@61616
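
For reference, this is roughly how we check that the thread count keeps growing while this flood is going on; the commands are just examples and <broker_pid> stands for the broker's java process id:

$ # total number of threads in the broker JVM (Linux)
$ ps -o nlwp= -p <broker_pid>
$ # number of NIO worker threads in a fresh thread dump
$ jstack <broker_pid> | grep -c '"ActiveMQ NIO Worker'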

All we can do is restart ActiveMQ; it then runs well for a few days before the problem happens again. I took a thread dump while the problem was occurring. Comparing it with a thread dump from a healthy period, the most problematic threads are the 'ActiveMQ NIO Worker' threads (the commands I use to compare the dumps are sketched after the trace). Those threads are all in WAITING state like this:
"ActiveMQ NIO Worker 474441" daemon prio=10 tid=0x00007f282841b000 nid=0x3d8
waiting on condition [0x00007f25f0e87000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00007f2b00a1f6b8> (a
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
        at
org.apache.activemq.store.kahadb.KahaDBStore.getLastProducerSequenceId(KahaDBStore.java:1129)
        at
org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.getLastProducerSequenceId(KahaDBPersistenceAdapter.java:159)
        at
org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter.getLastProducerSequenceId(MultiKahaDBPersistenceAdapter.java:267)
        at
org.apache.activemq.broker.TransportConnection.getProducerBrokerExchange(TransportConnection.java:1486)
        - locked <0x00007f2af3648308> (a java.util.HashMap)
        at
org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:544)
        at
org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:768)
        at
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:338)
        at
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:188)
        at
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
        at
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:125)
        at
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:300)
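
In case it helps, this is roughly how I compare the problem dump with a normal one (stack_normal.log is just an example name for a dump taken while the broker is healthy):

$ # NIO worker threads in the problem dump vs. a healthy dump
$ grep -c '"ActiveMQ NIO Worker' stack0908.log
$ grep -c '"ActiveMQ NIO Worker' stack_normal.log
$ # how many of them are parked in the KahaDB index lock
$ grep -c 'KahaDBStore.getLastProducerSequenceId' stack0908.log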

I grepped the dump file and found that they are all waiting on the same indexLock.writeLock(), but I cannot find which thread holds the lock:
$ grep 0x00007f2b00a1f6b8 stack0908.log
        - parking to wait for  <0x00007f2b00a1f6b8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
        - parking to wait for  <0x00007f2b00a1f6b8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
        - parking to wait for  <0x00007f2b00a1f6b8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
..............................................
        - parking to wait for  <0x00007f2b00a1f6b8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
$ grep 0x00007f2b00a1f6b8 stack0908.log | wc -l
576
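
As far as I know a plain thread dump only shows the threads parked on a java.util.concurrent lock, not its owner, so next time the problem occurs I plan to take the dump with jstack -l and look for the lock address in the "Locked ownable synchronizers" section. The commands below are only a sketch; <broker_pid> is the broker's process id and <lock_address> is the NonfairSync address reported in that dump. Note that this only reveals an exclusive write-lock holder, not threads that hold the read lock:

$ jstack -l <broker_pid> > stack_with_locks.log
$ # any match that is not a 'parking to wait for' line should sit under the
$ # 'Locked ownable synchronizers:' list of the thread holding the write lock
$ grep -n '<lock_address>' stack_with_locks.log | grep -v 'parking to wait'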

This is the thread dump file:
stack0908.zip
<http://activemq.2283324.n4.nabble.com/file/n4716353/stack0908.zip>  


