Just FYI, networkConnectors and the masterslave transport are for making
networks of brokers, which might be networks of failover pairs. If you just
have a single active-passive failover pair, you don't need those things.
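
For a plain active-passive pair, the client side is just a failover: URI
listing the two brokers. A minimal, untested sketch (host names are
placeholders):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverPairClient {
        public static void main(String[] args) throws Exception {
            // No networkConnector or masterslave: anywhere - the failover
            // transport alone handles switching to the surviving broker.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false");
            Connection connection = factory.createConnection();
            connection.start();
            // ... create sessions, producers and consumers as usual ...
            connection.close();
        }
    }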

Tim

On Wed, Dec 1, 2021, 1:49 AM Simon Lundström <si...@su.se> wrote:

> On Tue, 2021-11-30 at 17:20:31 +0100, Vilius Šumskas wrote:
> >[...]
> > As an alternative, does anybody know if I can use a non-HTTP SSL load
> > balancer and set the client URI to something like
> > ssl://loadbalancer_host:61616 ? I'm thinking that if the slave servers
> > do not respond to requests until they become the master, that would
> > allow me to have a simpler configuration for my clients. If I ever
> > need to add more slaves I would just add them under the same load
> > balancer.
>
> Yep. That's what JB meant by "but not bound as the broker is waiting
> for the lock".
>
> Using an external hardware LB is how we currently use ActiveMQ and it
> works great. Just make sure that your load balancer's healthchecks
> check all the protocols you are using, and not just one protocol or a
> simple ping.
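>
> As a rough illustration of "check all the protocols": a healthcheck
> should at least be able to open a TCP connection on every listener the
> broker exposes. An untested sketch (host and ports are examples; use
> whatever your brokers actually listen on):
>
>     import java.net.InetSocketAddress;
>     import java.net.Socket;
>
>     public class PortProbe {
>         static boolean isOpen(String host, int port) {
>             // A bare TCP connect; a real LB check would also probe at
>             // the protocol level where it can.
>             try (Socket s = new Socket()) {
>                 s.connect(new InetSocketAddress(host, port), 2000);
>                 return true;
>             } catch (Exception e) {
>                 return false;
>             }
>         }
>
>         public static void main(String[] args) {
>             // e.g. OpenWire, STOMP, AMQP - whichever you really use
>             System.out.println("openwire: " + isOpen("broker", 61616));
>             System.out.println("stomp:    " + isOpen("broker", 61613));
>             System.out.println("amqp:     " + isOpen("broker", 5672));
>         }
>     }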
>
> > If that's possible, which of the methods will be faster? We are
> > deploying a point-of-sale application and I want the failover to be
> > done in an instant, without losing any transactions (if that's
> > possible :)).
>
> Producers always have to deal with the network or the MQ being down;
> see <https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing>.
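>
> For example, with the failover transport a send() can block
> indefinitely while no broker is reachable; adding a timeout makes the
> outage visible so the application can react. An untested sketch (URI
> options, queue name and values are just examples):
>
>     import javax.jms.Connection;
>     import javax.jms.JMSException;
>     import javax.jms.MessageProducer;
>     import javax.jms.Session;
>     import org.apache.activemq.ActiveMQConnectionFactory;
>
>     public class FailFastProducer {
>         public static void main(String[] args) throws Exception {
>             ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
>                 "failover:(tcp://machineA:61616,tcp://machineB:61616)"
>                 + "?randomize=false&timeout=3000"); // fail sends after 3s offline
>             Connection connection = factory.createConnection();
>             connection.start();
>             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>             MessageProducer producer = session.createProducer(session.createQueue("POS.TX"));
>             try {
>                 producer.send(session.createTextMessage("sale"));
>             } catch (JMSException e) {
>                 // no broker reachable within the timeout - retry, buffer or alert
>             }
>             connection.close();
>         }
>     }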
>
> ActiveMQ polls to see if the current master is still up (the slave
> keeps trying to grab the lock), and you don't want to poll NFS too
> often.
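>
> The polling interval is tunable on the broker side. Below is an
> untested sketch using the embedded-broker Java API (the same thing is
> normally done in activemq.xml; the 10s value is just an example):
>
>     import java.io.File;
>     import org.apache.activemq.broker.BrokerService;
>     import org.apache.activemq.store.SharedFileLocker;
>     import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
>
>     public class SharedStorageBroker {
>         public static void main(String[] args) throws Exception {
>             KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
>             kahaDB.setDirectory(new File("/opt/kahadb")); // the NFS mount
>
>             SharedFileLocker locker = new SharedFileLocker();
>             locker.setLockAcquireSleepInterval(10000); // slave polls the lock every 10s
>             kahaDB.setLocker(locker);
>
>             BrokerService broker = new BrokerService();
>             broker.setPersistenceAdapter(kahaDB);
>             broker.addConnector("tcp://0.0.0.0:61616");
>             broker.start();
>             broker.waitUntilStopped();
>         }
>     }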
>
> How does your application currently deal with the fallacies of
> distributed computing when you're doing synchronous integrations?
>
> BR,
> - Simon
>
> > -----Original Message-----
> > From: Jean-Baptiste Onofré <j...@nanthrax.net>
> > Sent: Tuesday, November 30, 2021 6:01 PM
> > To: users@activemq.apache.org
> > Subject: Re: ActiveMQ 5.16.x Master/Slave topology question
> >
> > Hi,
> >
> > The masterslave: transport is deprecated. You can achieve the same
> > thing with randomize=false, basically.
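> >
> > If I remember correctly, in a networkConnector URI something like
> >
> >     masterslave:(tcp://machineA:61616,tcp://machineB:61616)
> >
> > was just shorthand for
> >
> >     static:(failover:(tcp://machineA:61616,tcp://machineB:61616)?randomize=false&maxReconnectAttempts=0)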
> >
> > Correct: updateClusterClientsOnRemove is only for network connectors,
> > i.e. when you have active/active (so a real network of brokers).
> >
> > No, the clients won't be stuck: they will reconnect to the new master.
> >
> > Let me illustrate this:
> > - you have an NFS shared filesystem on machine C
> > - machine A mounts the NFS filesystem (from C) on /opt/kahadb
> > - machine B mounts the NFS filesystem (from C) on /opt/kahadb
> > - you start brokerA on machineA; brokerA is the master (transport
> >   connector tcp on 61616)
> > - you start brokerB on machineB; brokerB is a slave (transport
> >   connector tcp on 61616, but not bound, as the broker is waiting for
> >   the lock)
> > - in your client connection factory, you configure the broker URL as
> >   failover:(tcp://machineA:61616,tcp://machineB:61616)
> > - as brokerA is the master, your clients are connected to brokerA
> > - you shut down brokerA; brokerB will take the lock and become the new
> >   master
> > - your clients will automatically reconnect to brokerB (see the sketch
> >   after this list)
> > - you start brokerA again; it's now a slave (as the lock is held by
> >   brokerB)
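> >
> > If you want to see the reconnect from the client side, you can
> > register a TransportListener on the connection (not required for the
> > failover to work; this is just for visibility). An untested sketch:
> >
> >     import org.apache.activemq.ActiveMQConnection;
> >     import org.apache.activemq.ActiveMQConnectionFactory;
> >     import org.apache.activemq.transport.TransportListener;
> >
> >     public class FailoverObserver {
> >         public static void main(String[] args) throws Exception {
> >             ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
> >                 "failover:(tcp://machineA:61616,tcp://machineB:61616)");
> >             ActiveMQConnection connection =
> >                 (ActiveMQConnection) factory.createConnection();
> >             connection.addTransportListener(new TransportListener() {
> >                 public void onCommand(Object command) {}
> >                 public void onException(java.io.IOException error) {
> >                     System.out.println("transport error: " + error);
> >                 }
> >                 public void transportInterupted() { // (sic, ActiveMQ spelling)
> >                     System.out.println("master lost, failover in progress");
> >                 }
> >                 public void transportResumed() {
> >                     System.out.println("reconnected to the new master");
> >                 }
> >             });
> >             connection.start();
> >         }
> >     }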
> >
> > Regards
> > JB
> >
> > On 30/11/2021 09:45, Vilius Šumskas wrote:
> > > Thank you for your response!
> > >
> > > Just out of curiosity, what is this masterslave:() transport about,
> > > then?
> > >
> > > Also, if I don't configure a network connection, will the
> > > updateClusterClientsOnRemove parameter take effect?
> > >
> > > My main concern is that clients will go into a stuck state
> > > during/after the failover. I'm not sure whether all I need is to
> > > handle this in the code with a TransportListener, or whether I need
> > > to set updateClusterClientsOnRemove and updateClusterClients on the
> > > broker side to make the failover smooth?
> > >
>
