I have 2 WebSphere app servers, each with the same web application deployed.

On each of these app servers, I have a separate ActiveMQ process running which
uses "Shared File System Master Slave" over a shared NFS mount.

As a result, at any point in time only one broker is the master and the other
is the slave.
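
For context, both brokers simply point their data directory at the same
location on the shared NFS mount, and whichever broker grabs the file lock
first becomes the master. The sketch below just illustrates that idea with an
embedded BrokerService; the broker name, NFS path and port are placeholders,
and my actual brokers are standalone ActiveMQ processes, not embedded ones:

    import org.apache.activemq.broker.BrokerService;

    public class SharedFsBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            // Placeholder name and path: both brokers must use the SAME
            // directory on the shared NFS mount; the exclusive lock on that
            // directory decides which broker is the master. The broker that
            // cannot obtain the lock waits as the slave.
            broker.setBrokerName("broker-server01");
            broker.setDataDirectory("/mnt/shared-nfs/activemq-data");
            broker.addConnector("tcp://0.0.0.0:61617");
            broker.start();
            broker.waitUntilStopped();
        }
    }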

Suppose, in my example, Server 1 is the master and Server 2 is the slave.

When the web apps start, both applications are able to connect to ActiveMQ and
put JMS messages on the queue or consume them from it. However, when I kill
the "primary" ActiveMQ process on Server 1, the ActiveMQ process on app server
2 becomes the master, which is the correct and expected behaviour.

However, the web application on app server 1 is no longer able to put JMS
messages on the queue or consume them from ActiveMQ. I always have to restart
the JVM on app server 1 to make it functional again.


On the web app itself, I already have the following failover URL configured:
failover:(tcp://server01:61617,tcp://server2:61617)?randomize=false&maxReconnectAttempts=10

But it seems that the web app does not reconnect to the new "primary" ActiveMQ
broker automatically.
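
For reference, here is roughly how that failover URL is passed to the client
side. This is a simplified standalone sketch using ActiveMQConnectionFactory
directly, with a made-up queue name, not my actual WebSphere resource
configuration:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverClient {
        public static void main(String[] args) throws Exception {
            // failover: transport listing both brokers; randomize=false keeps
            // the first URL as the preferred broker, and maxReconnectAttempts=10
            // limits how many times the transport retries before giving up.
            String url = "failover:(tcp://server01:61617,tcp://server2:61617)"
                    + "?randomize=false&maxReconnectAttempts=10";

            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("EXAMPLE.QUEUE"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello");
            producer.send(message);

            session.close();
            connection.close();
        }
    }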

Can anyone shed some light on how to make the web app work without restarting
the JVM?

Many thanks
