Mike Miljour wrote:
After further investigation, it turns out there was a configuration issue,
which could have been avoided with clearer documentation. (It might have
helped if I had included my configuration as well!)  We had set the value
for brokerName differently in our two running instances of ActiveMQ.  This
caused the brokers to act as though they were load balancing instead
of acting as Master and Slave (which was our intent).
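For anyone who hits the same thing, here is roughly what the corrected setup looks like; the broker name, port and shared directory below are only illustrative, and the same settings go on both instances (the shared store lock decides which one is active):

import org.apache.activemq.broker.BrokerService;

public class SharedStoreBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Identical on master and slave; do NOT make these unique per instance
        broker.setBrokerName("myBroker");
        // Same shared filesystem mount on both instances (path is hypothetical)
        broker.setDataDirectory("/mnt/shared/activemq-data");
        broker.addConnector("tcp://0.0.0.0:61616");
        // The instance that does not get the store lock waits as the slave
        broker.start();
        broker.waitUntilStopped();
    }
}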
Suggested documentation changes: In the schema reference for brokerName,
change the description from:

  "Sets the name of this broker; which must be unique in the network"

to:

  "Sets the name of this broker; which must be unique in the network, except
  for master-slave configurations, where it must be the same"

Also, in the shared file system master/slave documentation, include a note
stating that, if the setup is done correctly, the WebConsole will not load
for the slave until it becomes the master.  Also mention that the value for
brokerName must be the same for the master and all slaves.

What does "if the setup is done correctly" mean? The documentation states:

"Whilst a Slave is actively connected to the Master - it does not allow or start any network or transport connectors, it's sole purpose is to duplicate the state of the master."

I am using the same name on both master and slave. If I try to consume from the slave while the master is active, it doesn't consume messages, which is good. But if I produce against the slave, it accepts messages; it doesn't relay them to the consumers, but it does accept them.
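To illustrate, something along these lines is enough to see the slave accept a message (the slave host/port and queue name are made up for the example):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ProduceAgainstSlave {
    public static void main(String[] args) throws Exception {
        // Connect straight to the slave, bypassing the failover transport
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://slave-host:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
        // The send succeeds, but the message is never relayed to consumers on the master
        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}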

The problem with this is that, if there were a network problem, a producer could connect to a slave while the master is still active. The failover transport has properties such as maxReconnectAttempts, maxReconnectDelay, etc., but they only seem to have an effect if both master and slave fail (I'm referring to a Pure Master-Slave configuration). Any ideas?
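For reference, this is roughly how the clients connect; the host names and the option values are placeholders, not a recommendation:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // Master listed first, slave second; the reconnect options only seem to matter
        // once the broker the client is attached to actually goes down
        String url = "failover:(tcp://master-host:61616,tcp://slave-host:61616)"
                   + "?maxReconnectAttempts=10&maxReconnectDelay=5000&randomize=false";
        Connection connection = new ActiveMQConnectionFactory(url).createConnection();
        connection.start();
        // ... create sessions, producers, consumers ...
        connection.close();
    }
}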

Thx,
Eric
