Dear all,

I am running two live brokers, LIVEA and LIVEB, and two corresponding backup brokers, BACKUPA and BACKUPB.
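For context, my understanding is that Artemis pairs a replicating backup with a live broker by matching the <group-name> in their ha-policy blocks. A minimal sketch of the pairing I intend for one node (only the ha-policy blocks; names are illustrative):

```xml
<!-- Live broker (e.g. LIVEA): declares replication group Node1 -->
<ha-policy>
  <replication>
    <master>
      <group-name>Node1</group-name>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- Backup broker (e.g. BACKUPA): same group-name, so it should
     replicate from, and fail over for, LIVEA only -->
<ha-policy>
  <replication>
    <slave>
      <group-name>Node1</group-name>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
```

The LIVEB/BACKUPB pair is set up the same way with its own group name.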
LIVEA and BACKUPA belong to group node1, and LIVEB and BACKUPB belong to group node2. My expectation is that whenever I stop LIVEA, BACKUPA becomes live, and likewise for the other live/backup pair. That is not what happens: when I stop LIVEA, BACKUPA tries to take over LIVEB (which is already live), and I am not sure why. The configuration is below.

LIVEA configuration:

<ha-policy>
  <replication>
    <master>
      <group-name>Node1</group-name>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>
<connectors>
  <connector name="netty-connector">tcp://10.12.205.71:61616</connector>
  <connector name="netty-slave-connector">tcp://10.12.205.72:61616</connector>
</connectors>
<acceptors>
  <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
  <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
  <acceptor name="netty-connector">tcp://10.12.205.71:61613?connectionTtl=300000;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
  <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
  <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<broadcast-groups>
  <broadcast-group name="Noqodi-broadcast-group">
    <local-bind-address>10.12.205.71</local-bind-address>
    <local-bind-port>5432</local-bind-port>
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <broadcast-period>100</broadcast-period>
    <connector-ref>netty-connector</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="Noqodi-discovery-group">
    <local-bind-address>10.12.205.71</local-bind-address>
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>
<cluster-connections>
  <cluster-connection name="Noqodi-cluster">
    <address>jms,queue.test,test</address>
    <connector-ref>netty-connector</connector-ref>
    <check-period>1000</check-period>
    <connection-ttl>600000</connection-ttl>
    <min-large-message-size>50000</min-large-message-size>
    <call-timeout>5000</call-timeout>
    <retry-interval>500</retry-interval>
    <retry-interval-multiplier>1.0</retry-interval-multiplier>
    <max-retry-interval>5000</max-retry-interval>
    <initial-connect-attempts>-1</initial-connect-attempts>
    <reconnect-attempts>-1</reconnect-attempts>
    <use-duplicate-detection>true</use-duplicate-detection>
    <message-load-balancing>STRICT</message-load-balancing>
    <max-hops>1</max-hops>
    <confirmation-window-size>20000</confirmation-window-size>
    <call-failover-timeout>30000</call-failover-timeout>
    <notification-interval>1000</notification-interval>
    <notification-attempts>2</notification-attempts>
    <discovery-group-ref discovery-group-name="Noqodi-discovery-group"/>
  </cluster-connection>
</cluster-connections>

============

Slave configuration:

<ha-policy>
  <replication>
    <slave>
      <group-name>Node1</group-name>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
<connectors>
  <connector name="netty-connector">tcp://10.12.205.52:61618</connector>
</connectors>
<acceptors>
  <acceptor name="artemis">tcp://0.0.0.0:61618?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
  <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
  <acceptor name="netty-connector">tcp://10.12.205.52:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
  <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
  <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<broadcast-groups>
  <broadcast-group name="Noqodi-broadcast-group">
    <local-bind-address>10.12.205.52</local-bind-address>
    <local-bind-port>5432</local-bind-port>
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <broadcast-period>100</broadcast-period>
    <connector-ref>netty-connector</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="Noqodi-discovery-group">
    <local-bind-address>10.12.205.52</local-bind-address>
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>
<cluster-connections>
  <cluster-connection name="Noqodi-cluster">
    <address>jms,queue.test,test</address>
    <connector-ref>netty-connector</connector-ref>
    <check-period>1000</check-period>
    <connection-ttl>600000</connection-ttl>
    <min-large-message-size>50000</min-large-message-size>
    <call-timeout>5000</call-timeout>
    <retry-interval>500</retry-interval>
    <retry-interval-multiplier>1.0</retry-interval-multiplier>
    <max-retry-interval>5000</max-retry-interval>
    <initial-connect-attempts>-1</initial-connect-attempts>
    <reconnect-attempts>-1</reconnect-attempts>
    <use-duplicate-detection>true</use-duplicate-detection>
    <message-load-balancing>STRICT</message-load-balancing>
    <max-hops>1</max-hops>
    <confirmation-window-size>20000</confirmation-window-size>
    <call-failover-timeout>30000</call-failover-timeout>
    <notification-interval>1000</notification-interval>
    <notification-attempts>2</notification-attempts>
    <discovery-group-ref discovery-group-name="Noqodi-discovery-group"/>
  </cluster-connection>
</cluster-connections>

Please guide me on what the issue could be here. Thanks in advance for any reply.

- Naveen

--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html