On more recent versions of Artemis I have improved paging with JDBC.
I would suggest you keep more data paged by setting max-size-bytes on
the address-setting for your queue. However, Wildfly is a few Artemis
versions behind.
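
As a rough sketch (the match and the sizes here are just illustrative,
adjust them for your queue and memory budget), the address-setting in
broker.xml would look something like:

```xml
<address-settings>
   <!-- match your queue's address; "#" would match all addresses -->
   <address-setting match="jms.queue.myQueue">
      <!-- once the address holds more than ~10 MiB in memory,
           further messages are paged to storage instead -->
      <max-size-bytes>10485760</max-size-bytes>
      <page-size-bytes>1048576</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```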

Perhaps you could use Artemis standalone and consume it remotely from
Wildfly. Artemis is on a faster release cadence than Wildfly. As you
can imagine, Wildfly being a bigger project, it takes more time for
them to consume Artemis, especially since they have stringent
requirements on their testsuite.
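
To sketch what that could look like (host, port, and names below are
placeholders, not a tested configuration), the Wildfly
messaging-activemq subsystem can point a pooled-connection-factory at
a remote standalone broker instead of the embedded one:

```xml
<!-- outbound socket binding towards the standalone Artemis broker -->
<outbound-socket-binding name="remote-artemis">
    <remote-destination host="artemis-host" port="61616"/>
</outbound-socket-binding>

<!-- in the messaging-activemq subsystem -->
<remote-connector name="artemis" socket-binding="remote-artemis"/>
<pooled-connection-factory name="activemq-ra"
                           entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"
                           connectors="artemis"/>
```

With that in place the applications deployed on Wildfly keep using the
usual JmsXA factory, while the broker itself is upgraded independently
of Wildfly releases.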

Emmanuel Hugonet is doing a great job maintaining the compatibility,
but it's not easy to keep Wildfly consuming the latest version of
Artemis all the time.

On Thu, May 9, 2024 at 9:41 AM Justin Bertram <jbert...@apache.org> wrote:
>
> At the moment there is no broker-specific configuration that would impact
> this use-case. I recommend you simply give the JVM more memory to deal with
> the size of the JDBC ResultSet.
>
>
> Justin
>
> On Thu, May 9, 2024 at 3:55 AM Rakesh Athuru
> <rakesh.ath...@planonsoftware.com.invalid> wrote:
>
> > Hi,
> >
> > I am using JBoss Wildfly server with ActiveMQ (Artemis) configured.
> > I have a queue with persistence (to an MS SQL Server DB) enabled, and when
> > the queue contains a lot of messages (persisted to the DB), restarting the
> > Wildfly server causes queue loading to fail with an OutOfMemory error.
> > When I looked at the heap dump, I see the heap contains
> > org.apache.activemq.artemis.core.journal.RecordInfo objects and these are
> > being created from
> > "org.apache.activemq.artemis.jdbc.store.journal.JDBCJournalImpl#load(org.apache.activemq.artemis.core.journal.LoaderCallback)".
> > I also looked at the query used by
> > 'org.apache.activemq.artemis.jdbc.store.journal.JDBCJournalImpl' to select
> > messages: it queries the complete table, meaning that if the table has
> > 100,000 records all of them are read at once and a RecordInfo object is
> > created for each row.
> >
> > Is there a configuration which influences this use case and through which
> > I can avoid the OutOfMemory error?
> >
> > Regards,
> > Rakesh.A
> >
> >



-- 
Clebert Suconic
