Hi Everyone,

I am trying to set up a network of 3 brokers, one in each Amazon datacenter. The 
latency between the datacenters is quite high (25-150 ms) and it is severely 
impacting throughput compared to running all 3 brokers in the same datacenter. 
The 3 brokers are connected in a pipeline (1 -> 2 -> 3) using Core bridges; a 
rough sketch of the bridge configuration is below.
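Each hop is a core bridge defined in broker.xml, roughly along these lines (the 
queue, address, and connector names here are placeholders, not my actual 
configuration):

   <connectors>
      <connector name="broker2-connector">tcp://broker2.example.com:61616</connector>
   </connectors>

   <bridges>
      <bridge name="bridge-to-broker2">
         <queue-name>exampleQueue</queue-name>
         <forwarding-address>exampleQueue</forwarding-address>
         <static-connectors>
            <connector-ref>broker2-connector</connector-ref>
         </static-connectors>
      </bridge>
   </bridges>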

I have tried increasing tcpSendBufferSize and tcpReceiveBufferSize on the 
acceptors in broker.xml, but it does not seem to improve the throughput. When I 
use a really small buffer, the throughput goes down even further, so I know that 
changing the buffer size is having an effect. 
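For reference, the buffer sizes are set as URL parameters on the acceptor, 
roughly like this (the 1 MB values shown are just placeholders; I have tried a 
range of sizes):

   <acceptors>
      <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576</acceptor>
   </acceptors>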

Does anyone have an idea of what other settings I can try to improve the 
throughput?

Thanks!

Brian R

