As background: there is an ActiveMQ cluster (5.13.2 on CentOS 6.7, star
topology) here supporting MCollective. One datacenter is on the other side of
the Atlantic, and every time inter-datacenter connectivity is interrupted we
see the prefetch warning below, after which clustering to that ActiveMQ
instance stops working.
INFO | jvm 1 | 2016/04/22 21:47:45 | WARN | TopicSubscription:
consumer=mcomq4.me.eu->mcomq3.me.com-43531-1461361657724-32:1:1:1,
destinations=75, dispatched=1000, delivered=0, matched=1001, discarded=0: has
twice its prefetch limit pending, without an ack; it appears to be slow
How can I get the ActiveMQ instance that initiates the connection to stop
clogging up like this and instead just retry the connection periodically?
Or is it even reasonable to cluster ActiveMQ across datacenters at all?
More:
I haven't found any activemq.xml setting that reads like "automatically try to
reconnect" or "just throw away older messages". The relevant bits of
activemq.xml are below.
Clustering works fine until that log line appears. The instance in Europe, and
the daemons connected to it, keep working after the warning as long as I keep
my requests local to that datacenter.
Bits from activemq.xml:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">" producerFlowControl="false"
                   usePrefetchExtension="false">
        <messageEvictionStrategy>
          <oldestMessageEvictionStrategy/>
        </messageEvictionStrategy>
        <pendingMessageLimitStrategy>
          <prefetchRatePendingMessageLimitStrategy multiplier="2"/>
        </pendingMessageLimitStrategy>
      </policyEntry>
      <policyEntry queue="*.reply.>" gcInactiveDestinations="true"
                   inactiveTimoutBeforeGC="300000"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
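One idea from the slow-consumer-handling page that I have not tried yet: instead of letting the bridge's subscription accumulate a backlog, abort it outright. If I read the docs correctly, `abortSlowConsumerStrategy` with `abortConnection="true"` closes the slow consumer's whole connection, which should force the duplex bridge to tear down and reconnect rather than sit clogged. The period values below are guesses, not tested settings:

```xml
<!-- Untested sketch: abort consumers (including the network bridge's
     subscription) that stay slow, instead of letting messages pile up.
     checkPeriod/maxSlowDuration values are guesses. -->
<policyEntry topic=">" producerFlowControl="false"
             usePrefetchExtension="false">
  <slowConsumerStrategy>
    <!-- abortConnection="true" drops the connection itself, which should
         make the duplex bridge reconnect -->
    <abortSlowConsumerStrategy abortConnection="true"
                               checkPeriod="30000"
                               maxSlowDuration="120000"/>
  </slowConsumerStrategy>
</policyEntry>
```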
<networkConnectors>
  <networkConnector
      name="mcomq4.me.eu-mcomq3.me.com-topics"
      uri="static:(ssl://mcomq3.me.com:61617)"
      userName="amq"
      password="password"
      duplex="true"
      decreaseNetworkConsumerPriority="true"
      networkTTL="3"
      dynamicOnly="true">
    <excludedDestinations>
      <queue physicalName=">"/>
    </excludedDestinations>
  </networkConnector>
  <networkConnector
      name="mcomq4.me.eu-mcomq3.me.com-queues"
      uri="static:(ssl://mcomq3.me.com:61617)"
      userName="amq"
      password="password"
      duplex="true"
      decreaseNetworkConsumerPriority="true"
      networkTTL="3"
      dynamicOnly="true"
      conduitSubscriptions="false">
    <excludedDestinations>
      <topic physicalName=">"/>
    </excludedDestinations>
  </networkConnector>
</networkConnectors>
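On the reconnect side, the `static:` transport accepts failover-style reconnect options on the outer URI, and the inner `ssl:` URI can take `wireFormat.maxInactivityDuration` so a dead trans-Atlantic TCP connection gets noticed and torn down sooner. This is an untested sketch with guessed values (credentials and the second connector omitted for brevity):

```xml
<!-- Untested sketch: reconnect tuning on the static bridge URI.
     maxInactivityDuration (inner URI) should detect a dead link faster;
     maxReconnectDelay/useExponentialBackOff (outer URI) bound the retry
     behavior. All numbers are guesses. -->
<networkConnector
    name="mcomq4.me.eu-mcomq3.me.com-topics"
    uri="static:(ssl://mcomq3.me.com:61617?wireFormat.maxInactivityDuration=30000)?maxReconnectDelay=30000&amp;useExponentialBackOff=false"
    duplex="true"
    decreaseNetworkConsumerPriority="true"
    networkTTL="3"
    dynamicOnly="true">
  <excludedDestinations>
    <queue physicalName=">"/>
  </excludedDestinations>
</networkConnector>
```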
<transportConnectors>
  <transportConnector name="stomp+nio+ssl"
      uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2&amp;transport.hbGracePeriodMultiplier=5"/>
  <transportConnector name="openwire+nio+ssl"
      uri="ssl://0.0.0.0:61617?needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
</transportConnectors>
Other things I've read while trying to understand this:
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.0/html-single/Using_Networks_of_Brokers/index.html
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.0/html-single/Tuning_Guide/
http://activemq.apache.org/slow-consumer-handling.html
My previous threads elsewhere, from before I understood that a specific
network event was breaking the clustering:
https://groups.google.com/forum/#!topic/mcollective-users/MkHSVHt9uEI
https://groups.google.com/forum/#!topic/mcollective-users/R2mEnuV5eK8