I filed https://issues.apache.org/jira/browse/AMQ-5342 with the configuration
of the brokers, consumer, producer and full thread dump.
The two brokers are running in their own JVMs, connecting to each other over
TCP OpenWire. Internally the brokers create VM transports to bridge among
their connections.
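
For reference, the topology boils down to something like the sketch below,
using embedded BrokerService instances. Broker names and port numbers here
are placeholders; the real configuration is attached to the JIRA.

import org.apache.activemq.broker.BrokerService;

public class BridgedBrokers {
    public static void main(String[] args) throws Exception {
        // Broker A listens on TCP and bridges to broker B over a network connector.
        BrokerService brokerA = new BrokerService();
        brokerA.setBrokerName("brokerA");
        brokerA.setPersistent(false);
        brokerA.addConnector("tcp://localhost:61616");
        brokerA.addNetworkConnector("static:(tcp://localhost:61617)");
        brokerA.start();

        // Broker B runs in its own JVM in the real setup; shown side by side here
        // only to keep the sketch self-contained.
        BrokerService brokerB = new BrokerService();
        brokerB.setBrokerName("brokerB");
        brokerB.setPersistent(false);
        brokerB.addConnector("tcp://localhost:61617");
        brokerB.addNetworkConnector("static:(tcp://localhost:61616)");
        brokerB.start();
    }
}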
Hmm, it looks like there are two reentrant locks.
Are there 2 brokers running in the same JVM connecting over a network
connector with the VM transport?
Good catch. Something looks very wrong with this deadlock picture. Both
threads are blocking on this call:
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
Which is this instruction:
writeLock.lock();
Which is this method call:
java.util.concurrent.locks.ReentrantLock.lock()
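
For anyone not familiar with the failure mode, the shape of the problem is a
classic lock-ordering deadlock. Here is a contrived sketch (not ActiveMQ
code) where two threads each hold one ReentrantLock and block forever trying
to acquire the other, which is exactly how the two transport threads end up
stuck in writeLock.lock():

import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDeadlock {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    public static void main(String[] args) {
        // Thread 1 takes A then B; thread 2 takes B then A. If each grabs its
        // first lock before the other releases, both block forever.
        Thread t1 = new Thread(() -> acquireBoth(lockA, lockB), "transport-1");
        Thread t2 = new Thread(() -> acquireBoth(lockB, lockA), "transport-2");
        t1.start();
        t2.start();
    }

    private static void acquireBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            second.lock();   // blocks here, mirroring MutexTransport.oneway()
            try {
                // work would happen here
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }
}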
Actually it has caused problems with my integration test environments
because it is completely unexpected, and Spring doesn't know about kill -9;
it only knows about calling the shutdown hooks.
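
To be clear about what I mean by shutdown hooks: Spring only tears the
context down from a JVM shutdown hook, roughly like the sketch below (the
context file name is just a placeholder, not my actual test configuration).
kill -15 runs the hook; kill -9 never does, so the broker is torn down
abruptly and the peer only notices when the socket dies.

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ShutdownHookExample {
    public static void main(String[] args) {
        // Placeholder context file; the real tests use their own configuration.
        ConfigurableApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // Registers a JVM shutdown hook that closes the context (and stops any
        // embedded broker) on normal termination signals, but not on kill -9.
        ctx.registerShutdownHook();
    }
}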
I ran JConsole and immediately detected this deadlock:
Name: ActiveMQ Transport: tcp://localhost/127.0.0.1:6
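
For anyone who wants to check the same thing without a GUI, the deadlock
JConsole flags can also be found programmatically with the standard
ThreadMXBean API. A small sketch, meant to run inside the broker JVM or be
adapted to attach over JMX:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean mbean = ManagementFactory.getThreadMXBean();

        // Returns the ids of threads deadlocked on monitors or ownable
        // synchronizers (ReentrantLock), or null if there is no deadlock.
        long[] ids = mbean.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("No deadlock detected");
            return;
        }
        for (ThreadInfo info : mbean.getThreadInfo(ids, Integer.MAX_VALUE)) {
            System.out.println(info);
        }
    }
}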
The issue of "connection takes a long time to shutdown" has been around for a
long time. I've seen at least one fix for it recently, but it may still
have other causes.
If you can track down the cause and (even better) provide a patch, it would
be greatly appreciated.
Once reproduced, the next step
I ran the same topology with everything on the same server: both brokers,
the producer, and the multi-threaded consumer. I can reproduce this EVERY
SINGLE TIME.
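
Roughly the shape of consumer I mean is sketched below (broker URL, queue
name and thread count are placeholders, not the exact test code): a handful
of threads, each with its own session, receiving from the same queue on the
local broker of the bridge.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MultiThreadedConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // A few consumer threads, each with its own session, all pulling from
        // the same queue; messages arrive over the network bridge.
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                try {
                    Session session =
                            connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer consumer =
                            session.createConsumer(session.createQueue("TEST.QUEUE"));
                    while (true) {
                        Message msg = consumer.receive();
                        if (msg == null) {
                            break;
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, "consumer-" + i);
            t.start();
        }
    }
}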
I tried with 5.9.1 and see exactly the same behavior. I tried with 5.6
(which is the version I run in production) where I know this basic topology