The inter-broker bridges can be bottlenecks because they funnel all messages over a single connection.
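As a concrete illustration of partitioning destinations across bridges, a pair of duplex network connectors could each be confined to one destination subtree with dynamicallyIncludedDestinations. This is only a sketch; the broker URIs, ports, and connector names are made up for the example:

```xml
<!-- In each broker's activemq.xml: two bridges to the peer broker,
     each forwarding only its own slice of the destination space.
     (URIs/names below are illustrative, not from the original post.) -->
<networkConnectors>
  <!-- Bridge carrying only app.X.* topics -->
  <networkConnector name="bridge-app-x"
                    uri="static:(tcp://broker2:61616)"
                    duplex="true">
    <dynamicallyIncludedDestinations>
      <topic physicalName="app.X.>"/>
    </dynamicallyIncludedDestinations>
  </networkConnector>

  <!-- Bridge carrying only app.Y.* topics -->
  <networkConnector name="bridge-app-y"
                    uri="static:(tcp://broker2:61616)"
                    duplex="true">
    <dynamicallyIncludedDestinations>
      <topic physicalName="app.Y.>"/>
    </dynamicallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
```

With this layout each subtree's traffic gets its own bridge connection, so the two flows no longer contend for a single inter-broker socket.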
Often it is beneficial to partition destinations across multiple network bridges using the destinationFilter or include/exclude attributes. In a simple case where application destinations are easily partitioned, e.g. topic:app.X.<id> and topic:app.Y.<id>, configuring two network bridges and confining destinations of a particular name to each would help distribute the load, e.g.:

  <networkConnector destinationFilter="app.X.>" duplex="true" ...>
  <networkConnector destinationFilter="app.Y.>" duplex="true" ...>

On 5 April 2012 19:53, Saiky76 <[email protected]> wrote:
> Hi - we are trying out a POC with ActiveMQ for an extremely high rate of
> traffic. Our most important goal is the ability to scale. The topology
> involves topics and, for the sake of this post, please consider that there
> is only one consumer per topic.
>
> In our reference topology, producers (more than one) keep posting messages
> to topics on any broker in the cluster of brokers, using the failover
> protocol with randomization enabled. Similarly, the consumers also attempt
> to consume messages from any broker in the cluster. Basically, the
> producers and consumers are not aware of a topic's location on a specific
> broker. For this to work, I set up a network bridge (duplex) between the
> brokers. In tests that had 3 brokers, I ensured all the brokers had a
> direct network bridge to all other brokers, so that there is only one
> broker hop for a message to be forwarded to the broker on which the topic
> consumer is listening.
>
> The number of connections (both consumers and producers) on a single broker
> never exceeded about 250.
>
> The vertical scaling results for a single broker are quite impressive. But
> once I try my scaling tests, adding 2 or 3 brokers does not help, and the
> throughput that I got from one broker is exactly the same in the scaling
> tests with 2 or 3 brokers. The producers and consumers are randomly writing
> to and consuming from the brokers as described above.
> Actually, the load is balanced almost evenly in these writes and consumes.
> Please note I ensured enough CPU and memory on the consumer side.
>
> After doing some analysis, I feel the inter-broker communication is
> responsible. When I tried with 2 brokers, and let us say the total number
> of messages consumed is 200k, then the number of messages exchanged between
> the 2 brokers is 100k. Similarly, when 3 brokers are tested, exactly 66% of
> additional messages are exchanged between brokers. It looks like each
> broker is doing exactly what it is maximally capable of, but the additional
> power of adding a broker is nullified by the network hops.
>
> Any ideas or advice would be great? I am wondering whether horizontal
> scalability is actually designed for this. Please note I read the whole of
> the ActiveMQ in Action book and the literature available online on MQ.
> Please again note the additional network hop is always one and is
> predictable.
>
> I am trying this on MQ 5.5.1. Thanks.
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Inter-Broker-overhead-is-very-high-tp4535723p4535723.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.

--
http://fusesource.com
http://blog.garytully.com
