We have a network of brokers:

Stomp producers -> 200+ remote brokers -> bridged queue -> 1 central
broker -> ~128 Java consumers

The central broker is 5.8.0 and the remote brokers are a mix of 5.X.

The links between the remote brokers and the central broker vary in
latency and bandwidth.  Some are sub-millisecond with a large pipe;
others are 100+ ms with less than 2 Mbps.
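
For context, each remote broker forwards this queue to the central
broker with a network connector along these lines (host and queue
names are simplified here):

<networkConnectors>
  <!-- store-and-forward bridge from this remote broker to central -->
  <networkConnector name="to-central"
                    uri="static:(tcp://central-broker:61616)">
    <staticallyIncludedDestinations>
      <queue physicalName="EXAMPLE.QUEUE"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>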

Under normal circumstances, the consumers (in total) are processing
roughly 3,000 persistent msgs/second.

The central broker has a small memoryLimit (4m) set on this queue to
force the messages to queue on the remote brokers.  This way, if we
have a problem with the central broker (KahaDB corruption, host down,
etc.), data loss is minimal.  We are also using message groups on these
messages.  The central broker has its persistent store on a ram disk
so that it can process at this rate.
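
Concretely, the relevant bits of the central broker's configuration
look roughly like this (queue name and path simplified):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- tiny per-destination limit so producer flow control kicks
           in early and the backlog stays on the remote brokers -->
      <policyEntry queue="EXAMPLE.QUEUE" memoryLimit="4mb"
                   producerFlowControl="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<persistenceAdapter>
  <!-- KahaDB on a ram disk so the store can keep up with the rate -->
  <kahaDB directory="/mnt/ramdisk/kahadb"/>
</persistenceAdapter>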

The max throughput of the central broker with this configuration is
4-5k messages/second, or about 1.5 times our sustained message
throughput.

As such, when there is a problem with either the central broker, the
consumers, or the downstream persistent storage, we back up very
quickly.  For example, if the central broker is down for 10 minutes,
it will take roughly 20 minutes to catch up: the remote brokers are
still generating 3k msgs/second, so only the ~1.5k msgs/second of
headroom is available to drain the ~1.8 million message backlog.

Given the above, I was wondering:

1) During this 'catch up' period, how does ActiveMQ decide which
remote broker to accept messages from?  What we see is that some of
the remote brokers seem to starve until we drop below maximum
processing capacity.  For example, remote broker 1 will have none (or
very few) of the messages in its backlog processed, whereas remote
broker 2 will fully drain.  Another way to ask this might be: how does
the central broker decide which producer to block (flow control)?

2) Is there a better way to configure the central broker so that it
runs without a local disk store even though the messages are sent
persistent?  We want them persisted on the remote brokers, but not on
the central broker.
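
Something along these lines is what I had in mind, though I'm not
sure either is the right direction, or how it would interact with the
store-and-forward bridges:

<!-- disable persistence entirely on the central broker -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="central" persistent="false">
  <!-- rest of the broker configuration unchanged -->
</broker>

<!-- or keep the broker nominally persistent, but back it with a
     memory-only store instead of KahaDB on the ram disk -->
<persistenceAdapter>
  <memoryPersistenceAdapter/>
</persistenceAdapter>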

Thanks,

--Kevin
