Thank you Clebert. It would be very interesting to know more about this.
As far as I know, infinite redelivery is what you get with a max
redelivery count of -1, which we do not use. And we have a DLQ configured,
but nonetheless I would love to investigate this.
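For reference, what we have configured is essentially a finite
max-delivery-attempts plus a dead-letter-address. We set this through the
Wildfly messaging-activemq subsystem, but expressed against the embedded
Artemis AddressSettings API (names taken from that API, values purely
illustrative) it would look roughly like this:

    // Sketch only: roughly equivalent AddressSettings for what we have configured.
    // We actually configure this via the Wildfly subsystem, so treat the exact
    // values here as placeholders, not our real settings.
    import org.apache.activemq.artemis.api.core.SimpleString;
    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

    public class RedeliverySettingsSketch {
        public static AddressSettings settings() {
            return new AddressSettings()
                    .setMaxDeliveryAttempts(10) // not -1, so no infinite redelivery
                    .setDeadLetterAddress(SimpleString.toSimpleString("DLQ"));
        }
    }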
Silvio
On 31-10-2024 10:59, Clebert Suconic wrote:
I remember an old issue where rescheduled deliveries were updated over and
over.
As of the latest version, as far as I remember, I only update it once.
If you have infinite redelivery without a DLQ you might get into that
situation.
It would be difficult for me to find the exact JIRA now. But I will try to
look for it and will post it here if I find it.
Clebert Suconic
On Wed, Oct 23, 2024 at 4:28 PM Justin Bertram <jbert...@apache.org> wrote:
This sounds similar to an issue involving duplicate IDs proliferating in
the journal. I can't find the specific Jira at the moment, but the issue
was something like a huge build-up of duplicate ID records. Can you inspect
the "userRecordType" for the offending rows?
Also, how are you sending your message exactly? Do you need duplicate
detection?
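For context, duplicate detection is driven by the producer setting the
_AMQ_DUPL_ID property on each message, and every such ID ends up persisted
in the duplicate ID cache. A minimal JMS sketch, purely illustrative apart
from the property name:

    // Minimal illustration of a producer enabling duplicate detection.
    // _AMQ_DUPL_ID is the Artemis duplicate-detection property; class and
    // method names here are just example scaffolding.
    import javax.jms.JMSContext;
    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.TextMessage;
    import java.util.UUID;

    public class DuplicateDetectionSketch {
        public static void send(JMSContext context, Queue queue, String body) throws JMSException {
            TextMessage message = context.createTextMessage(body);
            // A stable business key is usually a better duplicate ID than a random UUID.
            message.setStringProperty("_AMQ_DUPL_ID", UUID.randomUUID().toString());
            context.createProducer().send(queue, message);
        }
    }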
Lastly, 2.16.0 is quite old at this point. There have been improvements to
JDBC since then which you'd almost certainly benefit from (not to mention
all the other bug fixes and features). Are you open to upgrading?
Justin
On Wed, Oct 23, 2024 at 4:32 AM Bisil <bi...@idfix.nl> wrote:
Hello,
Inside Wildfly 23.0.0 we are running ActiveMQ Artemis Message Broker
2.16.0 with JDBC persistence on SQL Server for ~25 message queues. In
some production environments we have moderate-to-high message volumes,
and since processing can be relatively slow, temporary message pileup is
not uncommon.
In one particular environment we are experiencing OutOfMemory issues
during startup. There are about 60K messages in 2 of the queues, while
the message table contains over 350M records, causing memory exhaustion
during startup. Running in a controlled environment with a ~60G heap,
startup succeeds, and through JProfiler we observe that all message table
records are selected and appear to be collected in memory. After that
they are processed and discarded, dropping memory usage to a fraction of
its peak. Using the JBoss CLI to inspect the queues then shows that we
indeed have 60K messages in the 2 queues.
Inspecting the contents of the message table, we see limited counts of
record types 13 (ADD_RECORD_TX) and 14 (UPDATE_RECORD_TX), roughly
equivalent to the 60K message count. All remaining records are of type 11
(ADD_RECORD).
When we removed all type 11 records, restart was fast with limited memory
load, and we still see 60K messages in the 2 queues.
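For clarity, the cleanup amounted to roughly the following, run while the
broker was shut down. The table and column names are assumptions based on
the default Artemis JDBC schema, so please treat this as a sketch of the
operation rather than a recipe.

    // Rough sketch of the cleanup we performed, with the broker stopped.
    // MESSAGES (table) and recordType (column) are assumed defaults; 11 is ADD_RECORD.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RemoveAddRecords {
        public static void main(String[] args) throws Exception {
            // args: jdbc-url user password
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement st = con.createStatement()) {
                int removed = st.executeUpdate("DELETE FROM MESSAGES WHERE recordType = 11");
                System.out.println("Removed " + removed + " ADD_RECORD rows");
            }
        }
    }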
In the past we have observed similar numbers and startup problems in
other environments, which led us to truncate the AMQ persistence tables
to be able to restart the server without an OutOfMemoryException. But we
are looking for a way to prevent this situation from happening.
So my questions are:
- Is the large record count in the message table expected behavior?
- Is there anything we can/should do to limit the number of records in
the message table?
- Is removing all type 11 records a valid workaround? If not, what would
be the side effects?
Thanks for your help!
Silvio
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@activemq.apache.org
For additional commands, e-mail: users-h...@activemq.apache.org
For further information, visit: https://activemq.apache.org/contact