Hello Oleg,

Have a read over 
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html

and perhaps try the suggested logging configuration. It should help you to 
figure out why journal files aren't getting deleted.
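
In case it helps, the key part of that page is turning on TRACE logging for KahaDB's MessageDatabase class, so the broker logs on every cleanup run which journal data files are still referenced and why. Roughly, in conf/log4j.properties (for an embedded broker, wherever your application's log4j.properties lives -- the appender name and log file path below are only an example):

  # route KahaDB cleanup diagnostics to a dedicated log file
  log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
  log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
  log4j.appender.kahadb.maxFileSize=1024KB
  log4j.appender.kahadb.maxBackupIndex=5
  log4j.appender.kahadb.append=true
  log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
  log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
  # the logger that reports why a journal data file cannot be deleted
  log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb

With that in place, watch the output for the data file ids that never get removed; the surrounding TRACE messages say why each file is still referenced, which usually points at the destination or subscription holding it.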

Hope this helps,

Torsten Mielke
tors...@fusesource.com
tmie...@blogspot.com


On Sep 9, 2012, at 1:45 PM, Oleg Dulin wrote:

> Dear Distinguished Colleagues:
> 
> Here is my use case.
> 
> For the most part my consumers are able to keep up with the workload, but 
> once a week or so there is a huge burst of messages (millions) that need to 
> be queued up for processing. We don't want producers blocked, so we want all 
> the messages to be queued up as fast as they come in, even if it takes us 
> longer to actually process them. Here are the policy entries:
> 
> <broker xmlns="http://activemq.apache.org/schema/core"
>         brokerName="localhost" dataDirectory="./activemq-data"
>         destroyApplicationContextOnStop="true" persistent="true"
>         useJmx="true">
> ….
>   <policyEntries>
>     <policyEntry topic=">" producerFlowControl="false" memoryLimit="64mb">
>       <pendingSubscriberPolicy>
>         <fileCursor />
>       </pendingSubscriberPolicy>
>     </policyEntry>
>     <policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb">
>       <pendingQueuePolicy>
>         <fileQueueCursor />
>       </pendingQueuePolicy>
>     </policyEntry>
>   </policyEntries>
> ….
>   <persistenceAdapter>
>     <kahaDB directory="${activemq.base}/activemq-data/kahadb" />
>   </persistenceAdapter>
> 
> 
> What happens is that after everything is done processing and queue sizes are down to 
> 0 (I checked in JConsole), the activemq-data directory is still consuming a 
> couple of dozen gigabytes of disk space. Isn't ActiveMQ supposed to clean up 
> after itself? I know it does, because at some point the disk utilization 
> grows to almost 100 GB, but then it shrinks back down to 30 GB or so -- and 
> stays there. Why is that?
> 
> We are using AMQ 5.5.1 with an embedded broker.
> 
> Any input is greatly appreciated.
> 
> Regards,
> Oleg
> 
> 




