Dear Colleagues:

We have about 1500 queues on the broker. Some of them are consumed in a "batch" fashion, so the journal gets fragmented and grows quite large. To reduce disk usage, I configured one KahaDB store per destination:

<persistenceAdapter>
    <mKahaDB directory="activemq-data/kahadb">
        <filteredPersistenceAdapters>
            <filteredKahaDB perDestination="true">
                <persistenceAdapter>
                    <kahaDB journalMaxFileLength="16mb" cleanupInterval="10000"/>
                </persistenceAdapter>
            </filteredKahaDB>
        </filteredPersistenceAdapters>
    </mKahaDB>
</persistenceAdapter>


However, broker restarts have now become very slow, because recovery appears to run sequentially over each of the ~1500 per-destination journals.
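One workaround I'm considering is to drop perDestination and instead partition the queues across a small, fixed number of stores using wildcard filters, so only a handful of journals need recovery at startup. A rough sketch (the "batch.>" prefix is just illustrative, our actual naming differs):

<persistenceAdapter>
    <mKahaDB directory="activemq-data/kahadb">
        <filteredPersistenceAdapters>
            <!-- batch-consumed queues get their own store, so their
                 fragmentation does not bloat the main journal -->
            <filteredKahaDB queue="batch.>">
                <persistenceAdapter>
                    <kahaDB journalMaxFileLength="16mb" cleanupInterval="10000"/>
                </persistenceAdapter>
            </filteredKahaDB>
            <!-- catch-all store for all remaining destinations -->
            <filteredKahaDB>
                <persistenceAdapter>
                    <kahaDB journalMaxFileLength="16mb" cleanupInterval="10000"/>
                </persistenceAdapter>
            </filteredKahaDB>
        </filteredPersistenceAdapters>
    </mKahaDB>
</persistenceAdapter>

That trades away per-destination cleanup granularity, though, so I'd rather keep the current layout if recovery can be sped up.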

Is there a way to speed this up?

Help is greatly appreciated.


--
Regards,
Oleg Dulin
http://www.olegdulin.com

