The purpose of producer flow control is to prevent the broker from running
out of memory when you try to store more messages than the JVM can hold.
So your options seem to be:
1. Turn on PFC. This will prevent more messages from being published
once you hit the limits enforced by PFC.
2. Leave PFC off, in which case messages get paged to disk once the memory
limits are hit and producers can keep publishing until the store (or temp
store) usage limits fill up instead.
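
For reference, the limits PFC enforces come from the per-destination policy
and the broker's SystemUsage memory limit. Here's a minimal sketch of setting
both on an embedded broker in Java; the specific numbers are illustrative,
not recommendations:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class FlowControlExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Per-destination policy: PFC kicks in once a destination's
        // memory usage reaches its memoryLimit.
        PolicyEntry policy = new PolicyEntry();
        policy.setProducerFlowControl(true);
        policy.setMemoryLimit(16 * 1024 * 1024); // 16 MB, illustrative
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);
        broker.setDestinationPolicy(policyMap);

        // System-wide memory limit that PFC also respects.
        broker.getSystemUsage().getMemoryUsage()
              .setLimit(256L * 1024 * 1024); // 256 MB, illustrative

        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
    }
}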
Can you submit a bug for this? Since you're running 5.14.0, there's clearly
no known bug whose fix you could pick up just by upgrading, and I wasn't
able to find any other descriptions of behavior like this (detecting
duplicates when paging in DLQ messages), so it appears to be a new problem.
Via JConsole you can tell how many messages have been dispatched to each
consumer; do those numbers match the stats that are telling you not all of
the published messages were consumed?
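
If it helps, the same numbers are reachable programmatically over JMX. A
minimal sketch in Java; the JMX URL and brokerName=localhost are assumptions,
so adjust them for your setup:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ConsumerStats {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi"); // assumed
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Match every queue consumer endpoint registered on the broker.
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=*,endpoint=Consumer,*");
            Set<ObjectName> consumers = mbs.queryNames(pattern, null);
            for (ObjectName consumer : consumers) {
                long dispatched = (Long) mbs.getAttribute(consumer, "DispatchedCounter");
                long dequeued = (Long) mbs.getAttribute(consumer, "DequeueCounter");
                System.out.println(consumer
                        + " dispatched=" + dispatched
                        + " dequeued=" + dequeued);
            }
        }
    }
}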
Tim
On Tue, Aug 23, 2016 at 12:52 AM, RRK4788 wrote:
> Hi All,
>
> We are using Apache ActiveMQ 5.14.0.
Nothing says that the open transaction is the same transaction at time A
and at time B. I've repeatedly been unable to truncate a table in Oracle (in
an application unrelated to ActiveMQ, but it illustrates the point) because
there were lots and lots of transactions per second against the table.
We are using ActiveMQ with a Network of Brokers in a hub-spoke topology.
ActiveMQ is deployed in a Karaf container.

Software stack used:
ActiveMQ version 5.10.0
Container: Karaf 3.0.3
Camel 2.14.1

The hub has two brokers, a master ActiveMQ and a slave ActiveMQ, configured
using a shared file system.
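
For reference, here's a minimal Java sketch of that layout: a shared-file-system
master/slave hub plus a spoke's network connector. The paths and hostnames are
assumptions, and a Karaf deployment would normally express the same thing in
activemq.xml rather than code:

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class Brokers {
    // Hub: master and slave run this same config; whichever process
    // acquires the KahaDB file lock on the shared mount becomes master.
    static BrokerService hub() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("hub");
        KahaDBPersistenceAdapter store = new KahaDBPersistenceAdapter();
        store.setDirectory(new File("/shared/activemq/kahadb")); // assumed mount
        broker.setPersistenceAdapter(store);
        broker.addConnector("tcp://0.0.0.0:61616");
        return broker;
    }

    // Spoke: network connector that follows whichever hub broker is master.
    static BrokerService spoke() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("spoke-1");
        broker.addNetworkConnector(
                "masterslave:(tcp://hub-a:61616,tcp://hub-b:61616)"); // assumed hosts
        broker.addConnector("tcp://0.0.0.0:61617");
        return broker;
    }

    public static void main(String[] args) throws Exception {
        hub().start();   // in reality these run on separate machines
        spoke().start();
    }
}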
Hi, we have a Blackboard application which uses ActiveMQ. It is causing the
database to grow huge in size; we can't truncate because there is always an
active transaction, and the transaction logs never get cleared out after
taking a backup due to an open transaction. Is there any way to force AMQ to
commit the transaction?
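
For context, an open JMS transaction stays open until the client that began
it calls commit() or rollback(), so the commit has to come from the client
side. A minimal sketch of a transacted consumer that commits promptly; the
broker URL and queue name are assumptions:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TransactedConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
        Connection connection = factory.createConnection();
        connection.start();
        // true + SESSION_TRANSACTED: all work in this session is held in an
        // open transaction until commit() or rollback() is called.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("EXAMPLE.QUEUE")); // assumed
        try {
            Message message;
            while ((message = consumer.receive(1000)) != null) {
                // ... process the message ...
                session.commit(); // commit promptly so no transaction stays open
            }
        } finally {
            session.close();
            connection.close();
        }
    }
}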