The journal files were cut down in size (journalMaxFileLength="5mb" in the
config below) to avoid running into the issue, but there is still the
potential for a message on the DLQ to be a 'useful artifact' sitting in a
journal file and keeping that file from being cleared, no?

<context:property-placeholder system-properties-mode="OVERRIDE"/>

<broker brokerName="web-console" persistent="true" useJmx="true"
        schedulerSupport="false"
        tmpDataDirectory="${ACTIVEMQ_STORE_DIR}/data/tmp_storage_dir">

  <managementContext>
    <managementContext createMBeanServer="false" createConnector="false"/>
  </managementContext>

  <persistenceAdapter>
    <kahaDB directory="${ACTIVEMQ_STORE_DIR}/data/kahadb"
            journalMaxFileLength="5mb"
            checkForCorruptJournalFiles="true"
            ignoreMissingJournalfiles="true"
            checksumJournalFiles="true"
            archiveCorruptedIndex="false"/>
  </persistenceAdapter>

  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://localhost:${OPENWIRE_PORT}"/>
  </transportConnectors>

  <systemUsage>
    <systemUsage sendFailIfNoSpace="true">
      <memoryUsage>
        <memoryUsage limit="100 mb"/>
      </memoryUsage>
      <storeUsage>...</storeUsage>
      <tempUsage>
        <tempUsage limit="500 mb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>

</broker>
</beans:beans>

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501


-----Original Message-----
From: Christian Posta [mailto:[email protected]] 
Sent: Thursday, November 21, 2013 6:13 PM
To: [email protected]
Subject: Re: Producer Flow Block - Consumer Deadlock after max memory limits 
exceeded

Inline...

On Thu, Nov 21, 2013 at 10:51 AM,  <[email protected]> wrote:
> Version: Active MQ v5.8
> Embedded Broker, Producer, Consumer all within same JVM
>
> If max memory limits are set to 320MB, which equates to 10 journal files
> (32MB per file), the files cannot be cleared if there is even one message
> on the DLQ.

So you might need to post your config (or show the code for your
config if embedded). "Memory Limits" set to 320MB isn't the same thing
as "Store Limits" set to 320MB with 32MB journal files. Individual
files will be cleared out if there are no useful artifacts in them
(messages, durable subscription info, producer audit data structures,
etc...). The default cleanup period is 30s, e.g.:

<kahaDB cleanupInterval="30000" .../>
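
To make the distinction concrete, here is a minimal sketch of setting both
limits programmatically on an embedded broker (the 32MB journal size and
320MB store limit are your numbers; the memory-limit value and class name
are made up for illustration):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class EmbeddedBrokerLimits {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);

        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setJournalMaxFileLength(32 * 1024 * 1024); // 32MB per journal file
        kahaDB.setCleanupInterval(30000); // scan for reclaimable files every 30s
        broker.setPersistenceAdapter(kahaDB);

        // The *store* limit caps on-disk KahaDB usage (the journal files)...
        broker.getSystemUsage().getStoreUsage().setLimit(320L * 1024 * 1024);
        // ...while the *memory* limit only caps the in-memory message cache.
        broker.getSystemUsage().getMemoryUsage().setLimit(64L * 1024 * 1024);

        broker.start();
    }
}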



> This 1 message blocks the freeing up of the journal file where it
> resides. In order to resolve this, the JVM is recycled. I'm sure there
> is a better way of resolving this issue. Any advice?

Are the producer and consumer using the same connection? What ack mode
is your consumer using?
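
(For reference, the ack mode is chosen when the session is created. A
minimal sketch, assuming an already-created ActiveMQ connection factory
named connectionFactory:)

// An unacknowledged message, or one parked on the DLQ and never consumed,
// stays a "useful artifact" in the store.
Connection connection = connectionFactory.createConnection();
Session autoAck   = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Session clientAck = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);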

Since this is embedded (broker,producer,consumer) it should be easy
enough to extract out the salient points and put together a unit test.
If you provide something concrete like that, I can take a look and
tell you exactly what's happening.
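
As a starting point, a bare-bones skeleton for that kind of test might look
like the following (queue name and payload are placeholders; it assumes the
activemq-broker jar on the classpath):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedRoundTripTest {
    public static void main(String[] args) throws Exception {
        // Broker, producer, and consumer all in the same JVM, as in your setup.
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);
        broker.setUseJmx(false);
        broker.start();

        // The vm:// transport connects to the already-running embedded broker.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?create=false");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TEST.QUEUE"); // placeholder name

        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("payload"));

        MessageConsumer consumer = session.createConsumer(queue);
        Message received = consumer.receive(5000);
        System.out.println("received: " + received);

        connection.close();
        broker.stop();
        broker.waitUntilStopped();
    }
}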


>
> Regards,
>
> Barry Barnett
> WMQ Enterprise Services & Solutions
> Wells Fargo
> Cell: 704-564-5501
>
>
>



-- 
Christian Posta
http://www.christianposta.com/blog
twitter: @christianposta
