We're running a webapp on Tomcat 5.5.20 that uses ActiveMQ 4.1.1 (and JDK 1.4.2). The problem we're seeing is that when Tomcat shuts down, the Java process is left hanging. A thread stack dump shows many waiting non-daemon threads of the following type:
"ActiveMQ Transport: tcp://devtest148/192.168.1.148:61616" prio=5 tid=0x503f94b0 nid=0x1028 in Object.wait() [f97f000..f97fdbc] at java.lang.Object.wait(Native Method) - waiting on <0x1bbe47f8> (a edu.emory.mathcs.backport.java.util.concurrent.CountDownLatch) at java.lang.Object.wait(Object.java:429) at edu.emory.mathcs.backport.java.util.concurrent.CountDownLatch.await(CountDownLatch.java:178) - locked <0x1bbe47f8> (a edu.emory.mathcs.backport.java.util.concurrent.CountDownLatch) at org.apache.activemq.network.DemandForwardingBridgeSupport.waitStarted(DemandForwardingBridgeSupport.java:842) at org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:332) at org.apache.activemq.network.DemandForwardingBridgeSupport$2.onCommand(DemandForwardingBridgeSupport.java:131) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:95) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:65) at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:133) at org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:122) at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:137) at java.lang.Thread.run(Thread.java:534) What seems to be happening is that we have something like a dozen people across the network doing testing/debugging of this application at the same time, and their ActiveMQ brokers are all finding and making connections to each other --- which is fine --- but then these connections are not being closed cleanly on shutdown --- which is a problem. This seems to be related to the multicast://default setting; here's our activemq.xml config file: <beans xmlns="http://activemq.org/config/1.0"> <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/> <broker brokerName="testserver" useJmx="false"> <persistenceAdapter> <journaledJDBC journalLogFiles="5" dataDirectory="./activemq-data"/> </persistenceAdapter> <transportConnectors> <transportConnector name="default" uri="tcp://localhost:61616" discoveryUri="multicast://default"/> </transportConnectors> <networkConnectors> <networkConnector name="default" uri="multicast://default" failover="false"/> </networkConnectors> <plugins> <!-- use JAAS to authenticate using the ldap_msad.conf file to configure JAAS --> <jaasAuthenticationPlugin configuration="activemq" /> </plugins> </broker> </beans> We added code to individually stop all the broker's connections, and then the broker, and then the broker service on shutdown, and that seemed to help on faster machines; but on slower machines we noted that the Multicast Discovery Agent Notifier was still finding and creating connections to other servers, even while this shutdown was going on. I noted in this latest thread stack dump that the Multicast Discovery Agent Notifier is also still running (see below), and I'm wondering if getting a handle to this and shutting it down explicitly will help. Can anyone tell me how to get a handle to it? Any other suggestions? I've looked at the user mailing list but haven't found anything quite like this situation, apologies if I've missed something, but either way thanks very much in advance for any help! 
regards,
Andrew

"Multicast Discovery Agent Notifier" daemon prio=5 tid=0x07b96b68 nid=0x13f8 in Object.wait() [f3bf000..f3bfdbc]
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:429)
    at edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:316)
    - locked <0x19774640> (a edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue$SerializableLock)
    at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:493)
    at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:689)
    at java.lang.Thread.run(Thread.java:534)