You probably already figured this out, but that setting should be applied on
any machine at either end of a connection that crosses a high-latency network
link. So definitely your brokers, but also any hosts running consumers that
connect to a broker across a high-latency link.
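
For what it's worth, here's a minimal sketch of how applying it might look,
assuming the setting in question is net.ipv4.tcp_slow_start_after_idle (that's
an assumption on my part; it matches the window-shrinking-on-idle behavior
described below, but substitute whatever setting was actually recommended):

    # Assumed setting name; replace with the one recommended earlier in the thread.
    # Stop the kernel from reverting to slow start after the connection goes idle:
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    # Persist it across reboots:
    echo "net.ipv4.tcp_slow_start_after_idle = 0" > /etc/sysctl.d/90-activemq-tcp.conf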

This setting is especially important if your broker-to-broker
networkConnectors are not duplex (i.e. each broker starts its own simplex
networkConnector to the other broker), because you can get scenarios where
one connection sits idle while the other is active and then they switch, so
the TCP window on the idle connection will have shrunk while it sat idle.
You don't want that behavior unless you have variable-quality network links
(or variable network routing), and even then the TCP window will close on
its own once you start sending data at a degraded rate, so there's really no
reason for the default to be set the way it is.
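
To illustrate the duplex vs. simplex distinction, here's a rough sketch of the
two arrangements in activemq.xml (broker names and URIs are placeholders; 61616
is just the default OpenWire port). A duplex connector is defined on one broker
and carries traffic in both directions over a single TCP connection, while the
simplex arrangement has each broker define its own connector, giving you two
connections, one of which can sit idle while the other is busy:

    <!-- Duplex: configured on broker-a only; one connection, both directions -->
    <networkConnectors>
      <networkConnector name="a-to-b" uri="static:(tcp://broker-b:61616)" duplex="true"/>
    </networkConnectors>

    <!-- Simplex: each broker configures its own connector, so two connections exist -->
    <!-- broker-a's activemq.xml -->
    <networkConnectors>
      <networkConnector name="a-to-b" uri="static:(tcp://broker-b:61616)"/>
    </networkConnectors>

    <!-- broker-b's activemq.xml -->
    <networkConnectors>
      <networkConnector name="b-to-a" uri="static:(tcp://broker-a:61616)"/>
    </networkConnectors>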

Tim

On Mon, Mar 16, 2015 at 6:42 AM, glenn.struzinski <
glenn.struzin...@oracle.com> wrote:

> Thank you for the reply Tim.
>
> I think we are going with a master/slave setup at each dc but the main hub
> datacenter will have the config that notes all other data centers.  Is it
> recommended to do the same with the other activemq.xml configs to point
> back
> to the hub, in case the hub goes down?
>
> We are trying to make this as robust as possible.
>
> We will implement the kernel setting you recommended.  Is that just on the
> activemq servers?
>
> I also think that since our puppet module has set forth the server name as
> the f5 vip name we will just filter traffic through the f5 unless it begins
> to give us trouble.  Otherwise we would have to redo the configs on 7 data
> centers and over 10000 systems (which I am sure could be done with
> puppet).
>
> Thank you
> Glenn Struzinski
>
