Based on the information you provided, the problem isn't what I originally
expected. Here's how the data breaks down per record type:

- Add (11)
  - Set scheduled delivery time (36): 376,143,458
  - Update delivery count (34): 290,102
- Add Transactional (13)
  - Add message (45): 63,893
- Update Transactional (14)
  - Add reference (32): 63,893
  - Acknowledge reference (33): 2
- Prepare (17): 228
- Commit (18): 22,941

So it looks like you're using a lot of scheduled messages, either directly
or indirectly (e.g. via redelivery delay), and the records related to the
delivery schedule are accumulating. If you stop the broker and run the
"journal compact" command, does the number of records in the database drop?

Out of curiosity, is there a specific reason you're using JDBC vs. the
traditional file-based journal on local disk?


Justin

On Mon, Oct 28, 2024 at 2:56 AM Bisil <bi...@idfix.nl> wrote:

> Thanks for the reply Justin.
>
> After restoring the database in its original state I ran counts on
> recordType/userRecordType and got this:
>
> recordType  userRecordType  count(*)
>
> 11          36              376143458
> 11          34              290102
> 13          45              63893
> 14          32              63893
> 18          255             22941
> 17          255             228
> 14          33              2
>
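> For reference, the counts came from a query along these lines (MESSAGES
> is the default message-table-name; adjust to your configuration):
>
>     SELECT recordType, userRecordType, COUNT(*) AS cnt
>     FROM MESSAGES
>     GROUP BY recordType, userRecordType
>     ORDER BY cnt DESC;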
>
> The messages are posted through JMS from inside the same Wildfly
> instance that runs ActiveMQ. Message handling is done by MDBs that run
> inside the same JVM. We do have some clustered Wildfly setups, although
> most are standalone.
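>
> For concreteness, the consumers follow the standard MDB pattern, roughly
> like this (sketch; queue name and class are hypothetical):
>
>     import javax.ejb.ActivationConfigProperty;
>     import javax.ejb.MessageDriven;
>     import javax.jms.Message;
>     import javax.jms.MessageListener;
>
>     @MessageDriven(activationConfig = {
>         @ActivationConfigProperty(propertyName = "destinationLookup",
>                                   propertyValue = "java:/jms/queue/WorkQueue"),
>         @ActivationConfigProperty(propertyName = "destinationType",
>                                   propertyValue = "javax.jms.Queue")
>     })
>     public class WorkMdb implements MessageListener {
>         @Override
>         public void onMessage(Message message) {
>             // process the message; a runtime exception here rolls the
>             // delivery back and the broker redelivers it (after any
>             // configured redelivery-delay)
>         }
>     }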
>
> We know we are running an old version. Upgrading Wildfly is on our
> roadmap but that may take quite some time. We investigated upgrading
> ActiveMQ separately but ran into too many issues and gave up on that idea.
>
> Silvio
>
>
> On 23-10-2024 22:27, Justin Bertram wrote:
> > This sounds similar to an issue involving duplicate IDs proliferating in
> > the journal. I can't find the specific Jira at the moment, but the issue
> > was something like a huge build-up of duplicate ID records. Can you
> > inspect the "userRecordType" for the offending rows?
> >
> > Also, how are you sending your message exactly? Do you need duplicate
> > detection?
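> >
> > For reference, duplicate detection is driven by a producer-side property
> > ("_AMQ_DUPL_ID" is the Artemis duplicate-ID header); a minimal sketch:
> >
> >     import javax.jms.JMSContext;
> >     import javax.jms.Queue;
> >
> >     public class DedupSend {
> >         // The broker remembers recent _AMQ_DUPL_ID values and drops a
> >         // message whose ID it has already seen, so the ID must be
> >         // stable across retries (e.g. a business key, not a random
> >         // value per attempt).
> >         static void sendOnce(JMSContext context, Queue queue,
> >                              String businessKey, String body) {
> >             context.createProducer()
> >                    .setProperty("_AMQ_DUPL_ID", businessKey)
> >                    .send(queue, body);
> >         }
> >     }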
> >
> > Lastly, 2.16.0 is quite old at this point. There have been improvements
> > to JDBC since then which you'd almost certainly benefit from (not to
> > mention all the other bug-fixes and features). Are you open to upgrading?
> >
> >
> > Justin
> >
> > On Wed, Oct 23, 2024 at 4:32 AM Bisil <bi...@idfix.nl> wrote:
> >
> >> Hello,
> >>
> >> Inside Wildfly 23.0.0 we are running ActiveMQ Artemis Message Broker
> >> 2.16.0 with JDBC persistence on SQLServer for ~25 message queues. In
> >> some production environments we have moderate-to-high message volumes,
> >> and since processing can be relatively slow, temporary message pileup
> >> is not uncommon.
> >>
> >> In one particular environment we are experiencing OutOfMemory issues
> >> during startup. There are about 60K messages in 2 of the queues, while
> >> the message table contains over 350M records, which exhausts memory at
> >> startup. Running in a controlled environment with a ~60G heap, startup
> >> succeeds, and through JProfiler we observe that all message table
> >> records are selected and appear to be collected in memory. After that
> >> they are processed and discarded, dropping memory usage to a fraction
> >> of its peak. Using the JBoss CLI to inspect the queues then shows that
> >> we do indeed have 60K messages in the 2 queues.
> >>
> >> Inspecting the contents of the message table, we see limited counts of
> >> record types 13 (ADD_RECORD_TX) and 14 (UPDATE_RECORD_TX), roughly
> >> matching the 60K message count. All remaining records are type 11
> >> (ADD_RECORD).
> >>
> >> When we removed all type 11 records, the restart was fast with limited
> >> memory load, and we still saw 60K messages in the 2 queues.
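> >>
> >> Concretely, the removal was a plain delete run with the broker stopped,
> >> along these lines (MESSAGES is the default message-table-name):
> >>
> >>     DELETE FROM MESSAGES WHERE recordType = 11;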
> >>
> >> In the past we have observed similar numbers and startup problems in
> >> other environments, which led us to truncate the AMQ persistence tables
> >> in order to restart the server without an OutOfMemoryError. But we are
> >> looking for a way to prevent this situation from happening.
> >>
> >> So my questions are:
> >>
> >> - Is the large record count in the message table expected behavior?
> >>
> >> - Is there anything we can/should do to limit the number of records in
> >> the message table?
> >>
> >> - Is removing all type 11 records a valid workaround? If not, what
> >> would be the side-effects?
> >>
> >> Thanks for your help!
> >>
> >> Silvio
> >>
>
