That means you're producing messages faster than they are being consumed.

You could set up a policy entry to turn off producer flow control:

<policyEntry topic="topicname" producerFlowControl="false" memoryLimit="10mb"/>

<policyEntry queue="queuename" producerFlowControl="false" memoryLimit="10mb"/>
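
These entries go inside the broker's destinationPolicy section of activemq.xml. A minimal sketch (the broker name and the topic/queue names are just placeholders):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- disable producer flow control and cap per-destination memory -->
        <policyEntry topic="topicname" producerFlowControl="false" memoryLimit="10mb"/>
        <policyEntry queue="queuename" producerFlowControl="false" memoryLimit="10mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>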

However, if your consumers never catch up, you need to decide what you want to do with your messages, as eventually you will run out of resources, be it the memory or the disk space that holds all the messages.

The <policyEntry> element has a number of sub-elements you can configure to handle other needs; a sketch of one approach is below.
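
For example, for topics with slow non-durable subscribers you can cap the pending message backlog so old messages get dropped instead of piling up. A rough sketch (the limit value is only an example, tune it for your load):

<policyEntry topic="topicname" producerFlowControl="false" memoryLimit="10mb">
  <!-- keep at most 1000 pending messages per slow (non-durable) subscriber -->
  <pendingMessageLimitStrategy>
    <constantPendingMessageLimitStrategy limit="1000"/>
  </pendingMessageLimitStrategy>
  <!-- when the limit is exceeded, discard the oldest messages first -->
  <messageEvictionStrategy>
    <oldestMessageEvictionStrategy/>
  </messageEvictionStrategy>
</policyEntry>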

Filip

jaya_srini wrote:
Hi,
We are using the ActiveMQ 5.0 release and are observing the following in
production. After a certain number of messages are sent, the ActiveMQ send
blocks. The thread dump produced the following:

daemon prio=6 tid=0x3793f400 nid=0x1f28 waiting for monitor entry [0x38aff000..0x38affc98]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1587)
        - waiting to lock <0x07c45ea0> (a java.lang.Object)
        at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:226)
        at org.apache.activemq.ActiveMQMessageProducerSupport.send(ActiveMQMessageProducerSupport.java:268)
        at org.apache.activemq.ActiveMQTopicPublisher.publish(ActiveMQTopicPublisher.java:146)


The connection URI looks like the following:

failover:(tcp://10.11.12.13:61616?wireFormat.maxInactivityDuration=-1)

I am not sure if jms.useAsyncSend=true or jms.dispatchAsync=true will work
with a failover transport.
Can someone please help me troubleshoot this? Will increasing the memory
limit on the broker help?

