Re: ActiveMQ Classic Broker and Client Module Issue

2024-10-23 Thread Matt Pavlovich
Yes, ActiveMQ 7.x is a ways off. Having a Java module-friendly client jar would be interesting, though. In the past, we've added transition jars to the tree (i.e. activemq-client-jakarta in 5.18.x) and we could do something similar. If you go down the route of making a shaded jar, share your shad
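For anyone exploring the shaded-jar route Matt mentions, a minimal Maven Shade sketch might look like the following. This is an illustrative assumption, not an official setup: the relocation package (`com.example.shaded.activemq`) is hypothetical, and the plugin version is just a recent one.

```xml
<!-- Hypothetical Maven Shade configuration; the relocation target
     package is an illustrative assumption, not a project convention. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.5.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Relocate bundled dependencies so the shaded jar does not
               clash with other jars on the class/module path. -->
          <relocation>
            <pattern>org.apache.activemq</pattern>
            <shadedPattern>com.example.shaded.activemq</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The trade-off is exactly the one discussed in this thread: shading avoids split-package and automatic-module-name clashes, at the cost of maintaining the relocation list yourself.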

Re: Question regarding problems with JDBC persistence

2024-10-23 Thread Justin Bertram
This sounds similar to an issue involving duplicate IDs proliferating in the journal. I can't find the specific Jira at the moment, but the issue was something like a huge build-up of duplicate ID records. Can you inspect the "userRecordType" for the offending rows? Also, how are you sending your
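A query along these lines can surface which record types dominate the journal table. This is a sketch under assumptions: `MESSAGES` is the default Artemis JDBC message-journal table name and may differ per deployment, and the column names follow the default Artemis JDBC journal schema.

```sql
-- Sketch only: assumes the default Artemis JDBC journal schema
-- (MESSAGES table with a userRecordType column); adjust the table
-- name to match your broker's configured message-table-name.
SELECT userRecordType, COUNT(*) AS record_count
FROM MESSAGES
GROUP BY userRecordType
ORDER BY record_count DESC;
```

A heavily skewed count for one `userRecordType` would be consistent with the duplicate-ID build-up described above.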

Artemis MQ 2.36.0 Load Balancing questions

2024-10-23 Thread tin.m....@lmco.com
Hi, Attached is a zip file describing the configuration and the questions I have with Load Balancing. Let me know if I need to provide any additional information. Regards, Tin (Tien) Lai

RE: Re: ActiveMQ Classic Broker and Client Module Issue

2024-10-23 Thread Freeman, Christopher
It looks like the OSGi version would be compatible code-wise, but unfortunately it has not been packaged with a module-info. I was trying to avoid a shaded jar so that there is less dependency micromanagement, but I may have to look into that route if we wish to proceed
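A lighter-weight alternative to both a full module-info and a shaded jar is an `Automatic-Module-Name` manifest entry, which gives the jar a stable name on the module path. The sketch below shows how that entry could be added when building or repackaging a jar; the module name shown is an illustrative assumption, and for the official client jar this would need to happen upstream.

```xml
<!-- Hypothetical: adds an Automatic-Module-Name manifest entry so the
     Java module system assigns the jar a stable name without a full
     module-info descriptor. The module name is an assumed example. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestEntries>
        <Automatic-Module-Name>org.apache.activemq.client</Automatic-Module-Name>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>
```

This avoids the filename-derived automatic module name, which is fragile and can change between releases.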

Question regarding problems with JDBC persistence

2024-10-23 Thread Silvio Bierman
Hello, Inside Wildfly 23.0.0 we are running ActiveMQ Artemis Message Broker 2.16.0 with JDBC persistence on SQLServer for ~25 message queues. In some production environments we have moderate-to-high message volumes, and since processing can be relatively slow, temporary message pileup is not un
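For reference, this is roughly what such a JDBC persistence setup looks like in a standalone Artemis broker.xml; when Artemis is embedded in WildFly, the equivalent settings live in the messaging subsystem configuration instead. The connection URL, host, and database name below are placeholders, and the table names are the Artemis defaults.

```xml
<!-- Illustrative broker.xml fragment for Artemis JDBC persistence on
     SQL Server; connection URL and table names are placeholders /
     defaults, not taken from the poster's environment. -->
<store>
  <database-store>
    <jdbc-driver-class-name>com.microsoft.sqlserver.jdbc.SQLServerDriver</jdbc-driver-class-name>
    <jdbc-connection-url>jdbc:sqlserver://db-host:1433;databaseName=artemis</jdbc-connection-url>
    <message-table-name>MESSAGES</message-table-name>
    <bindings-table-name>BINDINGS</bindings-table-name>
    <large-message-table-name>LARGE_MESSAGES</large-message-table-name>
    <page-store-table-name>PAGE_STORE</page-store-table-name>
  </database-store>
</store>
```

With slow consumers and sustained volume, the message table in such a setup is where journal records accumulate, which is why inspecting it (as suggested in the reply above) is a reasonable first diagnostic step.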
