Re: kahadb cleanup problem

2014-11-20 Thread artnaseef
I recommend first identifying any cause of slow consumption (DLQs especially - they are notorious) and taking action to eliminate it. For example, a Camel route that consumes from the DLQ and logs the messages is a good way to keep the DLQ from acting as a message store - as long
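
A minimal sketch of such a drain route, using the Camel Java DSL. The broker URL, the activemq-camel component setup, and the default ActiveMQ.DLQ queue name are assumptions to adjust for your own broker:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class DlqDrainRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // Broker URL is an assumption; point it at your broker.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Consume from the default dead letter queue and log each message,
                // so the DLQ stops acting as a message store that pins journal files.
                from("activemq:queue:ActiveMQ.DLQ")
                        .to("log:dlq?showBody=true&showHeaders=true");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive; use a proper lifecycle in real code
    }
}

The same route could archive the messages to a file or another destination instead of only logging them, if you still need their contents.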

Re: kahadb cleanup problem

2014-11-19 Thread artnaseef
Start by looking for any messages sitting around, even small numbers of messages. DLQs are a prime candidate.
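
One way to spot those leftover messages is to walk the queue MBeans over JMX and flag anything non-empty. A sketch, assuming the default JMX connector on port 1099 and a broker named "localhost" (both are assumptions; check the exact object names in jconsole):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StuckMessageScan {
    public static void main(String[] args) throws Exception {
        // JMX URL and broker name are assumptions; adjust for your deployment.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Match every queue MBean on the broker (ActiveMQ 5.8+ object-name layout).
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=*");
            Set<ObjectName> queues = conn.queryNames(pattern, null);
            for (ObjectName queue : queues) {
                Long size = (Long) conn.getAttribute(queue, "QueueSize");
                if (size != null && size > 0) {
                    // Any non-empty destination can keep old journal files from being cleaned up.
                    System.out.println(queue.getKeyProperty("destinationName")
                            + " holds " + size + " message(s)");
                }
            }
        }
    }
}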

Re: kahadb cleanup problem

2014-02-14 Thread artnaseef
Maybe the TX part is not the real problem. Keep in mind that it only takes a single message to hold an entire 32mb KahaDB file. Note that 32mb is the default; the size can vary. So, check for any destinations with some number of old messages.
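
The 32mb figure is the journal file length, which is configurable on the persistence adapter. A sketch of an embedded broker that shrinks it, so a single stuck message pins less disk; the 16mb value and the connector URL are illustrative assumptions, not a recommendation:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class SmallerJournalBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("localhost"); // assumption: embedded broker for illustration
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        // The default journal file length is 32mb; one unconsumed message keeps the
        // whole file (and everything else in it) from being cleaned up. A smaller
        // file length limits how much space a single stuck message can pin.
        kahaDb.setJournalMaxFileLength(16 * 1024 * 1024); // 16mb, illustrative value
        broker.setPersistenceAdapter(kahaDb);
        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}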

Re: kahadb cleanup problem

2014-02-13 Thread Gary Tully
Are there any XA or distributed in-doubt transactions in the mix? If so, each would have its own MBean after a restart, hanging off the broker MBean. What is odd is that you still see the problem after a restart. Are the data files something you could share? I think this needs some debugging. On 13
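
If you want to check for those MBeans without clicking through jconsole, a rough sketch is to list everything registered under the broker MBean and look for transaction entries. The JMX URL, broker name, and the name-matching heuristic are assumptions, since the exact object-name keys for recovered XA transactions vary by version:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class InDoubtXaScan {
    public static void main(String[] args) throws Exception {
        // JMX URL and broker name are assumptions; adjust for your deployment.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // List every MBean hanging off the broker and keep the ones whose name
            // mentions a transaction; recovered in-doubt XA transactions show up
            // here after a restart (exact key names depend on the ActiveMQ version).
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,*");
            Set<ObjectName> beans = conn.queryNames(pattern, null);
            for (ObjectName bean : beans) {
                if (bean.getCanonicalName().toLowerCase().contains("transaction")) {
                    System.out.println("in-doubt candidate: " + bean);
                }
            }
        }
    }
}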

Re: kahadb cleanup problem

2014-02-13 Thread janhanse
I forgot to mention that I did check the KahaDBPersistenceAdapter in JMX, but the Transactions attribute is empty.

Re: kahadb cleanup problem

2014-02-13 Thread Gary Tully
Can you peek at the KahaDB bean via jconsole and access the Transactions attribute? It should have some detail on the pending tx. On 13 February 2014 09:11, janhanse wrote: > I see several others have much the same problem, but I have not found any way of fixing it so far. We are runnin
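
For reference, the same check can be scripted instead of done by hand in jconsole. A sketch that reads the Transactions attribute from the persistence adapter MBean; the JMX URL, broker name, and the service=PersistenceAdapter object-name key are assumptions to verify against your version in jconsole:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KahaDbTransactionsCheck {
    public static void main(String[] args) throws Exception {
        // JMX URL, broker name, and the object-name keys below are assumptions;
        // confirm the exact names for your ActiveMQ version in jconsole.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "service=PersistenceAdapter,*");
            Set<ObjectName> adapters = conn.queryNames(pattern, null);
            for (ObjectName adapter : adapters) {
                // The same attribute inspected in this thread: a non-empty value
                // means pending transactions are still pinning journal files.
                Object transactions = conn.getAttribute(adapter, "Transactions");
                System.out.println(adapter + " -> Transactions = " + transactions);
            }
        }
    }
}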