I assume you're referring to redelivery-delay-multiplier and
max-delivery-attempts here.
Yes, that is correct.
I don't think that is relevant here.
Ok, good to know.
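For anyone following along, these two settings are per-address-settings parameters in broker.xml, and the multiplier scales the base redelivery-delay exponentially with each attempt until max-delivery-attempts is exhausted. A minimal sketch of that scaling, assuming the usual formula delay * multiplier^(attempt - 1) capped by max-redelivery-delay (the exact broker internals may differ):

```java
// Hedged sketch: how redelivery-delay-multiplier is assumed to scale the
// delay between delivery attempts. Values below are illustrative, not defaults.
public class RedeliveryDelay {

    // Assumed formula: base delay grows by multiplier^(attempt - 1),
    // capped at max-redelivery-delay.
    static long delayForAttempt(long redeliveryDelayMs, double multiplier,
                                long maxRedeliveryDelayMs, int attempt) {
        double delay = redeliveryDelayMs * Math.pow(multiplier, attempt - 1);
        return Math.min((long) delay, maxRedeliveryDelayMs);
    }

    public static void main(String[] args) {
        // e.g. redelivery-delay=1000, redelivery-delay-multiplier=2.0,
        //      max-redelivery-delay=30000, max-delivery-attempts=6
        for (int attempt = 1; attempt <= 6; attempt++) {
            System.out.println("attempt " + attempt + ": "
                    + delayForAttempt(1000, 2.0, 30_000, attempt) + " ms");
        }
    }
}
```

With those example values the delays grow 1000, 2000, 4000, 8000, 16000 ms, and the sixth attempt is capped at 30000 ms instead of 32000.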
I had a look through the JDBC code and I see now that compaction is a red
herring. The normal file-based journal is append-only which is why it
requires occasional compaction. However, JDBC is not append-only. Records
are removed during normal processing, including the
set-scheduled-delivery-time records that are accumulating in your
database.
I tested this with both 2.16.0 and 2.38.0 (i.e. the latest) and I didn't
see any accumulation. As soon as the message associated with the
set-scheduled-delivery-time records was acknowledged, all those records
were cleaned up.
During some very small tests, running integration tests that create and
process several messages at a time, we did see some accumulation, but
these records were cleaned up again at regular intervals. It looked
like an asynchronous removal process, and it did clean everything up
nicely.
In any case, I don't believe that WildFly even supports the "data
compact" command, so it would almost certainly be a headache to execute
it.
To be clear, I typically only recommend folks use the embedded broker in
WildFly for the most trivial use-cases. As soon as complexity increases
it's usually good to have a standalone broker which can be managed (e.g.
upgraded, restarted, etc.) separately.
So far, apart from these relatively rare issues, we have not had many
problems with this setup. But thanks for letting us know.
Although compaction doesn't appear to be related here, I just wanted to
say that I don't believe the broker always compacts the journal at
startup. What gave you that impression?
This came from what we saw during the startup of our system running
against the database I got these numbers from. After reading everything
into memory, the created objects were released very quickly, and the
database trace showed a large number of delete statements. Since we knew
our system was not processing the messages, we suspected this was part
of some cleanup process.
Ultimately I'm not sure what's going on here. I'd need to be able to
reproduce the problem myself for further investigation. Do you have a way
to reproduce this problem?
We do not, unfortunately. The production environments where these
problems occurred earlier have been running for weeks since without
issues or journal-size growth. We did, however, significantly increase
the memory and processing power of the cloud environments concerned.
Thanks for all your support, Justin. We will keep monitoring the
environments, and if we have new information we will return here.
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@activemq.apache.org
For additional commands, e-mail: users-h...@activemq.apache.org
For further information, visit: https://activemq.apache.org/contact