Hi,

The masterslave: transport is deprecated. You can achieve the same thing with a failover: URL and randomize=false, basically.
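For example, these two client broker URLs are equivalent (host names are placeholders), since masterslave: is just shorthand for failover: with randomization disabled:

```
masterslave:(tcp://machineA:61616,tcp://machineB:61616)
failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false
```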

Correct: updateClusterClientsOnRemove only applies to network connectors, i.e. when you have an active/active setup (a real network of brokers).
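To make that concrete, here is a sketch of where those attributes live in broker.xml (host names are made up); they sit on the transportConnector and only have an effect when the brokers are actually linked by network connectors:

```xml
<!-- Active/active sketch: cluster-update attributes on the transport
     connector only matter when a networkConnector links the brokers. -->
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"
      updateClusterClients="true"
      updateClusterClientsOnRemove="true"/>
</transportConnectors>
<networkConnectors>
  <networkConnector uri="static:(tcp://otherBroker:61616)"/>
</networkConnectors>
```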

No, the clients won't be stuck: they will reconnect to the new master.
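The failover: transport handles the reconnect on its own; a TransportListener is only needed if you want to log or react to the interruption. A minimal client sketch (host names are placeholders; requires activemq-client on the classpath and a running broker):

```java
import java.io.IOException;
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverAwareClient {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false");
        Connection connection = factory.createConnection();
        // Listener is optional: failover reconnects by itself, this just
        // lets the application observe the interruption and resumption.
        ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
            public void onCommand(Object command) { /* every inbound command */ }
            public void onException(IOException error) {
                System.err.println("Transport failed: " + error);
            }
            public void transportInterupted() { // note: the API spells it this way
                System.out.println("Connection lost, failover in progress...");
            }
            public void transportResumed() {
                System.out.println("Reconnected to a broker");
            }
        });
        connection.start();
    }
}
```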

Let me illustrate this:
- you have an NFS shared filesystem on machine C
- machine A mounts the NFS filesystem (from C) on /opt/kahadb
- machine B mounts the NFS filesystem (from C) on /opt/kahadb
- you start brokerA on machineA; brokerA is the master (transport connector tcp on 61616)
- you start brokerB on machineB; brokerB is a slave (transport connector tcp on 61616, but not bound, as the broker is waiting for the lock)
- in your client connection factory, you configure the broker URL with failover:(tcp://machineA:61616,tcp://machineB:61616)
- as brokerA is master, your clients are connected to brokerA
- you shutdown brokerA, brokerB will take the lock and become the new master
- your clients will automatically reconnect to brokerB
- you start brokerA, it's now a slave (as the lock is on brokerB)
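The broker side of the steps above can be sketched in broker.xml (identical on both machines apart from brokerName; /opt/kahadb is the NFS mount from the example). The shared KahaDB directory is what provides the lock that elects the master:

```xml
<!-- Sketch for brokerA; brokerB is the same with brokerName="brokerB". -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <persistenceAdapter>
    <!-- Both brokers point at the same NFS-mounted directory;
         whichever grabs the lock first becomes master. -->
    <kahaDB directory="/opt/kahadb"/>
  </persistenceAdapter>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```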

Regards
JB

On 30/11/2021 09:45, Vilius Šumskas wrote:
Thank you for your response!

Just out of curiosity, what is this masterslave:() transport about then?

Also, if I don't configure a network connector, will the 
updateClusterClientsOnRemove parameter take effect?

My main concern is that clients will go into a stuck state during/after the 
failover. I'm not sure whether all I need is to handle this in the code 
with a TransportListener, or whether I need to set updateClusterClientsOnRemove and 
updateClusterClients on the broker side to make failover smooth?
