I'd strongly recommend setting the net.ipv4.tcp_slow_start_after_idle = 0
kernel parameter for any machine on either end of a high-latency network
link; you don't really want the congestion window collapsing back to its
initial size simply because the link has gone idle, but it's especially
troublesome on high-latency links because the high round-trip time means
it takes that much longer for the window to grow back.  (We discovered
the need for that kernel parameter after observing the performance of a
broker-to-broker connection across a WAN emulator configured for a
high-latency link.)
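
In case it saves anyone a lookup, here's the usual way to set it on a
Linux box -- standard sysctl mechanics, nothing ActiveMQ-specific, so
adjust paths to whatever your distro uses:

    # Apply immediately (doesn't survive a reboot):
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    # Persist it by adding this line to /etc/sysctl.conf (or a file
    # under /etc/sysctl.d/), then reload with `sysctl -p`:
    net.ipv4.tcp_slow_start_after_idle = 0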

I'd only consider the load balancers if sessions are sticky over a long
period of time; ActiveMQ's connections are stateful, unlike HTTP requests
(for example), so you want to make sure that the load balancer keeps a
client connected to the same broker for the long haul.  I'd probably
consider load-balancing with the URI options in the failover transport
instead, since they give you the ability to rebalance clients if a node
rejoins a cluster...  But that makes me realize: what's the point of a load
balancer in front of a master/slave cluster?  Only the master is active, so
everyone's connected to it, until it fails and the slave becomes the new
master.  What are your F5s buying you here?  (I'd actually worry that
they'd get in the way of the failover process, since they'd assume they
know more about the current state of the cluster than they actually do.)
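
On the failover-transport point, this is the sort of thing I mean; the
broker names and ports below are just placeholders, and it's worth
double-checking the option names against the docs for your ActiveMQ
version:

    Client connection URI, letting the failover transport pick a broker
    and reconnect when one goes away:

        failover:(tcp://brokerA:61616,tcp://brokerB:61616)?randomize=true

    Broker-side transport connector, asking the broker to push cluster
    membership updates to clients and rebalance them as brokers come
    and go:

        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"
            updateClusterClients="true" rebalanceClusterClients="true"/>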

On Fri, Mar 13, 2015 at 8:09 AM, glenn.struzinski <
glenn.struzin...@oracle.com> wrote:

> Let me also add the systems at each datacenter are behind F5 load
> balancers.
> Is this a good practice or should we avoid the load balancers?
