Yup, it turns out there was a logic error in app2 causing this problem. As for app1 hitting producer flow control: it's not normally expected, but it had become a frequent occurrence recently given the increased load on app2.

Thanks for your help.
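For anyone who hits the same stall, here is a minimal, hypothetical sketch (invented host, queue, and class names, not the actual app code) of a producer that surfaces flow control instead of hanging silently. It assumes the broker is configured with sendFailIfNoSpaceAfterTimeout, as in the config quoted further down; without that setting, send() simply blocks while the broker is over its limits.

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.ResourceAllocationException;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FlowControlAwareProducer {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker host and queue name, substitute your own.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("nio://broker2.example.com:61617");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            try {
                // With producerFlowControl="true" and no sendFail* setting, this call
                // simply blocks while the broker is over its memory limit.
                producer.send(session.createTextMessage("payload"));
            } catch (ResourceAllocationException e) {
                // With sendFailIfNoSpaceAfterTimeout="5000" (as in the posted broker
                // config) the broker instead rejects the send after about 5 seconds,
                // which makes a stalled consumer visible to the producer.
                System.err.println("Broker out of space, backing off: " + e.getMessage());
                Thread.sleep(5000); // crude back-off before retrying or alerting
            }
        } finally {
            connection.close();
        }
    }
}

The catch block is where a producer like app1 could log, alert, or back off, so a slow consumer gets noticed long before the broker sits at 100% memory.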
ceposta wrote:
> You may want to also look at app2 and see what it's doing and why it's not
> receiving or ack'ing messages. Is app1 hitting producer flow control
> expected?
>
> --
> Christian Posta
> http://www.christianposta.com/blog
> twitter: @christianposta
>
> On Mon, Aug 5, 2013 at 2:05 PM, Jmal <jm15119b@...> wrote:
>
>> I placed the broker in DEBUG mode and did not notice any log messages
>> about reaching the limit or about the broker waiting for consumer
>> responses. Would the DEBUG log level output a particular message that
>> gives a clue about this problem?
>>
>> Thanks for the help.
>>
>> ceposta wrote:
>>> You'll have to check what the broker thinks is going on. If the broker
>>> dispatches messages up to the consumer prefetch limit and the consumer
>>> hasn't sent acks back, the broker will not try to send any more
>>> messages, and your consumers will look hung.
>>>
>>> On Thu, Aug 1, 2013 at 5:27 PM, Jmal <jm15119b@...> wrote:
>>>
>>>> Hello, I'm hoping someone can help me understand why I'm having an
>>>> issue with my applications' use of ActiveMQ under flow-control
>>>> situations. I'm running two ActiveMQ 5.4.3 brokers on OpenJDK 1.6,
>>>> each configured with 5 queues. I have two applications in separate
>>>> JVMs connecting to the brokers independently: app1 connects to broker1
>>>> and broker2, and app2 connects to broker2. app1 pushes messages to one
>>>> or more queues on broker2 using broker2's nio URI, and app2 reads and
>>>> processes those messages on broker2. All works well until broker1's
>>>> and broker2's memory usage reaches 100% (as observed in the admin UI).
>>>> At that point app1's connection to broker2 is placed in flow control,
>>>> as expected. The weird part is that app2 is no longer able to read any
>>>> messages from broker2, which is very unexpected. If I understand
>>>> correctly, producer flow control is only supposed to slow down the
>>>> producers on app1's connection to broker2, not app2's connection to
>>>> broker2. If that is true, is there a known bug in the version of
>>>> ActiveMQ I'm using, or have I misconfigured the broker? Below is the
>>>> config used for both brokers; each runs on a separate EC2 instance
>>>> with 7 GB of memory. Any help will be greatly appreciated.
>>>>
>>>> Thanks
>>>>
>>>> <beans
>>>>     xmlns="http://www.springframework.org/schema/beans"
>>>>     xmlns:amq="http://activemq.apache.org/schema/core"
>>>>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>>     xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>>       http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
>>>>       http://activemq.apache.org/schema/core
>>>>       http://activemq.apache.org/schema/core/activemq-core.xsd">
>>>>
>>>>   <broker xmlns="http://activemq.apache.org/schema/core"
>>>>           brokerName="localhost"
>>>>           dataDirectory="/usr/data/activemq"
>>>>           destroyApplicationContextOnStop="true"
>>>>           persistent="true">
>>>>
>>>>     <destinationPolicy>
>>>>       <policyMap>
>>>>         <policyEntries>
>>>>           <policyEntry topic=">" producerFlowControl="true">
>>>>             <pendingSubscriberPolicy>
>>>>             </pendingSubscriberPolicy>
>>>>           </policyEntry>
>>>>           <policyEntry queue=">" producerFlowControl="true">
>>>>             <pendingQueuePolicy>
>>>>             </pendingQueuePolicy>
>>>>           </policyEntry>
>>>>         </policyEntries>
>>>>       </policyMap>
>>>>     </destinationPolicy>
>>>>
>>>>     <managementContext>
>>>>       <managementContext createConnector="true" rmiServerPort="61516"/>
>>>>     </managementContext>
>>>>
>>>>     <persistenceAdapter>
>>>>       <kahaDB directory="/usr/data/activemq/kahadb"
>>>>               enableJournalDiskSyncs="false"
>>>>               indexWriteBatchSize="10000"
>>>>               indexCacheSize="1000"/>
>>>>     </persistenceAdapter>
>>>>
>>>>     <systemUsage>
>>>>       <systemUsage sendFailIfNoSpaceAfterTimeout="5000">
>>>>         <memoryUsage>
>>>>           <memoryUsage limit="4096 mb"/>
>>>>         </memoryUsage>
>>>>         <storeUsage>
>>>>           <storeUsage limit="10240 mb"/>
>>>>         </storeUsage>
>>>>         <tempUsage>
>>>>           <tempUsage limit="100 mb"/>
>>>>         </tempUsage>
>>>>       </systemUsage>
>>>>     </systemUsage>
>>>>
>>>>     <sslContext>
>>>>       <sslContext keyStore="keystore" keyStorePassword="abcd1234"
>>>>                   trustStore="truststore" trustStorePassword="efgh5678"/>
>>>>     </sslContext>
>>>>
>>>>     <transportConnectors>
>>>>       <transportConnector name="ssl" uri="ssl://0.0.0.0:61616"/>
>>>>       <transportConnector name="nio" uri="nio://0.0.0.0:61617"/>
>>>>     </transportConnectors>
>>>>
>>>>   </broker>
>>>>
>>>>   <import resource="jetty.xml"/>
>>>>
>>>> </beans>
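To illustrate the prefetch/ack behavior Christian describes above: a minimal consumer sketch (again with invented host and queue names) showing why a consumer that stops acknowledging looks hung once its prefetch window fills, which is the same symptom a stuck consumer produces regardless of the underlying cause.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchAwareConsumer {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker host and queue name, substitute your own.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("nio://broker2.example.com:61617");
        // Lower the prefetch so a misbehaving consumer holds fewer unacked messages
        // (the default queue prefetch is 1000).
        factory.getPrefetchPolicy().setQueuePrefetch(10);

        Connection connection = factory.createConnection();
        connection.start();
        try {
            // CLIENT_ACKNOWLEDGE: dispatched messages stay "in flight" on the broker
            // until acknowledge() is called.
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
            Message message;
            while ((message = consumer.receive(5000)) != null) {
                process(message);      // if this never returns (e.g. a logic error),
                message.acknowledge(); // no ack is sent, the prefetch window fills, and
                                       // the broker stops dispatching: the consumer looks hung
            }
        } finally {
            connection.close();
        }
    }

    private static void process(Message message) {
        // application-specific work would go here
    }
}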
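Finally, since the other open question was how to tell when the broker has actually hit the systemUsage limits, here is a rough sketch of reading the broker MBean over JMX instead of watching DEBUG logs or the admin UI. The JMX URL (default connector port 1099, independent of rmiServerPort) and the 5.4.x-style ObjectName are assumptions; adjust them for your installation.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerMemoryCheck {
    public static void main(String[] args) throws Exception {
        // Assumed JMX URL: createConnector="true" publishes a connector, by default on
        // port 1099; rmiServerPort only pins the RMI server port. Adjust host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker2.example.com:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            // ObjectName layout used by the 5.4.x line (later 5.x releases renamed it).
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:BrokerName=localhost,Type=Broker");
            int memPct = ((Number) mbs.getAttribute(broker, "MemoryPercentUsage")).intValue();
            int storePct = ((Number) mbs.getAttribute(broker, "StorePercentUsage")).intValue();
            System.out.println("memory usage: " + memPct + "%, store usage: " + storePct + "%");
        } finally {
            jmxc.close();
        }
    }
}

Polling these two percentages from a cron job or monitoring agent would have flagged the 100% memory condition here well before producers started blocking.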