Brian,

Sounds good, I'm glad you got it working.

Tim

On Mon, Feb 1, 2021, 10:05 PM Brian Ramprasad <bri...@cs.toronto.edu> wrote:

> Hi Tim,
>
> Thanks for your reply. I actually came across your stack overflow post and
> I tried it out but it didn’t work for me.
>
> I ended up using multiple bridges between datacenters. This allows us to
> have more than one TCP session, so there are more acks on the wire at
> once. Basically I needed to adjust the number of bridges based on the
> amount of latency I was experiencing. It was a trial-and-error approach
> to determine the correct number of bridges.
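>
> A sketch of that setup in broker.xml, with hypothetical bridge, queue, and
> connector names (the exact schema depends on your Artemis version):
>
> ```xml
> <!-- Two parallel core bridges from DC1 to DC2; all names are illustrative. -->
> <bridges>
>    <bridge name="dc1-to-dc2-a">
>       <queue-name>bridge.dc2.a</queue-name>
>       <forwarding-address>app.events</forwarding-address>
>       <static-connectors>
>          <connector-ref>dc2-connector</connector-ref>
>       </static-connectors>
>    </bridge>
>    <bridge name="dc1-to-dc2-b">
>       <queue-name>bridge.dc2.b</queue-name>
>       <forwarding-address>app.events</forwarding-address>
>       <static-connectors>
>          <connector-ref>dc2-connector</connector-ref>
>       </static-connectors>
>    </bridge>
> </bridges>
> ```
>
> Each bridge drains its own store-and-forward queue over its own connection,
> which is what puts multiple TCP sessions on the link.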
>
>
> Brian
>
> > On Jan 12, 2021, at 10:20 PM, Tim Bain <tb...@alumni.duke.edu> wrote:
> >
> > Early in my time as an ActiveMQ user, I ran into unexpectedly poor
> > performance in a network of ActiveMQ 5.x brokers communicating across a
> > high-latency network link. The particular situation I had involved links
> > that were idle long enough for the TCP congestion window to contract
> > back to its minimum value between bursts of traffic, resulting in
> > overall low throughput. The solution ended up being tweaks to the TCP
> > configuration on the hosts in question, though it also turned out that
> > the problem would only occur in certain somewhat unlikely scenarios and
> > was mainly an artifact of my simplistic test case. Full details are at
> > https://stackoverflow.com/questions/25494929/3x-latency-observed-in-activemq-only-for-higher-wan-latencies
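> >
> > (For the curious, the host-level tweaks in question are in the family of
> > Linux TCP sysctls below; these values are generic illustrations, not the
> > exact settings from that post:)
> >
> > ```
> > # /etc/sysctl.d/99-wan.conf -- illustrative values only
> > # Don't let the congestion window collapse back to its initial value
> > # after the connection sits idle between traffic bursts:
> > net.ipv4.tcp_slow_start_after_idle = 0
> > # Allow larger windows on high bandwidth-delay-product links:
> > net.core.rmem_max = 16777216
> > net.core.wmem_max = 16777216
> > net.ipv4.tcp_rmem = 4096 87380 16777216
> > net.ipv4.tcp_wmem = 4096 65536 16777216
> > ```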
> >
> > This may or may not be relevant to your situation (though it's worth
> > investigating), but there's a general conclusion to be drawn: sometimes
> > the root cause of a problem exists outside of the broker itself, so your
> > attempts to characterize the problem need to consider not only the
> > broker but also other elements such as the network stack. (From the
> > sound of it, your environment won't give you any control over the
> > network itself; otherwise I'd include that in the list of things to
> > instrument.)
> >
> > Tim
> >
> > On Tue, Jan 12, 2021, 8:23 AM Brian Ramprasad <bri...@cs.toronto.edu>
> > wrote:
> >
> >> Hi Everyone,
> >>
> >>
> >> I am trying to set up a network of 3 brokers, one in each Amazon
> >> datacenter. The latency between the datacenters is quite high
> >> (25-150ms) and it is severely impacting the throughput compared to
> >> running all 3 brokers in the same datacenter. The 3 brokers are
> >> connected in a pipeline (1->2->3) using Core bridges.
> >>
> >> I have tried increasing the tcpSendBufferSize and the
> >> tcpReceiveBufferSize on the acceptors in broker.xml, but it does not
> >> seem to improve the throughput. When I use a really small buffer, the
> >> throughput goes down even further, so I know that changing the buffer
> >> size is having an effect.
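> >>
> >> For concreteness, the acceptor change looks something like this (the
> >> 1 MB buffer values are just illustrative):
> >>
> >> ```xml
> >> <acceptors>
> >>    <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576</acceptor>
> >> </acceptors>
> >> ```
> >>
> >> (If it matters, my understanding is that the connector URL on the
> >> sending broker accepts the same parameters, so the bridge's outgoing
> >> side can be tuned the same way.)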
> >>
> >> Does anyone have an idea of what other settings I can try to improve the
> >> throughput?
> >>
> >> Thanks!
> >>
> >> Brian R
> >>
> >>
> >>
>
>
