+1 to GitHub.

To be clear, an out-of-band upload is much preferred to attachments.
There's no need to distribute copies of an attached reproducer to over
1,000 individual inboxes.


Justin

On Tue, Oct 14, 2025 at 9:15 AM Timothy Bish <[email protected]> wrote:

> On 10/14/25 02:55, [email protected] wrote:
> > Hello Justin,
> >
> > I had attached the program to the previous email, but it was stripped
> > when it got posted.
> > What would be the right place to put it?
>
> One easy way to share is to create a public GitHub repository with your
> reproducer and share the link here.
>
>
> >
> >
> > Regards
> >
> > Herbert
> >
> > Von: "Justin Bertram" <[email protected]>
> > An: [email protected]
> > Datum: 13.10.2025 20:47
> > Betreff: [Ext] Re: Re: WebConsole on broker with many queues
> > ------------------------------------------------------------------------
> >
> >
> >
> > I just set up a quick proof-of-concept using a new, default instance
> > of 2.42.0 with 18,000 multicast addresses and no queues defined in
> > broker.xml. I set up a producer and consumer to run constantly in the
> > background, sending and receiving messages as fast as possible, and
> > then I opened the web console and browsed through the addresses.
> > Memory spiked to 800MB when I first logged in to the web console,
> > but everything seemed to work fine. There was no significant
> > lag/slow-down.
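> >
> > For reference, one way to script that many addresses is the core
> > client; a minimal sketch, not necessarily how I built my test (URL,
> > address naming, and credentials are placeholders):
> >
> > import org.apache.activemq.artemis.api.core.RoutingType;
> > import org.apache.activemq.artemis.api.core.SimpleString;
> > import org.apache.activemq.artemis.api.core.client.*;
> >
> > public class CreateAddresses {
> >     public static void main(String[] args) throws Exception {
> >         // Core API session; credentials omitted for brevity.
> >         try (ServerLocator locator =
> >                  ActiveMQClient.createServerLocator("tcp://localhost:61616");
> >              ClientSessionFactory factory = locator.createSessionFactory();
> >              ClientSession session = factory.createSession()) {
> >             for (int i = 0; i < 18_000; i++) {
> >                 // Multicast address only; no queue is created,
> >                 // matching the "no queues defined" setup.
> >                 session.createAddress(SimpleString.of("test.address." + i),
> >                         RoutingType.MULTICAST, false);
> >             }
> >         }
> >     }
> > }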
> >
> > Could you perhaps upload your MultiQueue.java somewhere so I could
> > reproduce what you're seeing? Clearly my proof-of-concept didn't cover
> > your specific use-case.
> >
> >
> > Justin
> >
> > On Mon, Oct 13, 2025 at 9:04 AM <[email protected]> wrote:
> > Hello Gašper,
> >
> > 1000 queues are not enough to hit the session TTL.
> > It has been reported that the web console does not open if the number
> > of objects (queues, topics) is too big.
> > The customer had 18,000 address objects, and opening the web console
> > made all clients connect to another broker in the cluster.
> >
> > The program MultiQueue.java creates a given number of queues to
> > simulate this.
> >
> >
> > Start it with 3 args:
> > 0: connection URL
> > 1: prefix, e.g. MultiQ
> > 2: 18000
> > It will connect to the connection URL and try to create MultiQ_0 ...
> > MultiQ_17999.
> > When MultiQ_16000 or so is created, try to open the web console:
> > after login it is completely knocked out and seems to push the broker
> > to its limit.
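> >
> > In essence it boils down to this (a simplified sketch; logging and
> > error handling omitted): every createConsumer() call auto-creates its
> > queue and blocks until the broker confirms, which is where the timeout
> > below eventually hits.
> >
> > import javax.jms.*;
> > import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
> >
> > public class MultiQueue {
> >     public static void main(String[] args) throws Exception {
> >         String url = args[0];                  // 0: connection URL
> >         String prefix = args[1];               // 1: prefix, e.g. MultiQ
> >         int count = Integer.parseInt(args[2]); // 2: e.g. 18000
> >         ConnectionFactory cf = new ActiveMQConnectionFactory(url);
> >         Connection conn = cf.createConnection();
> >         conn.start();
> >         Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
> >         for (int i = 0; i < count; i++) {
> >             Queue q = session.createQueue(prefix + "_" + i);
> >             // Blocking round trip to the broker; the queue is
> >             // auto-created here (see AutoCreateUtil in the trace below).
> >             session.createConsumer(q);
> >             System.out.println("serving on:" + q);
> >         }
> >         // Keep all consumers open while the web console is opened.
> >         Thread.sleep(Long.MAX_VALUE);
> >     }
> > }
> >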
> > After some time the test program terminates:
> >
> > serving on:ActiveMQQueue[MultiQ_16111]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking
> > SessionQueueQueryMessage[type=45, channelID=11, responseAsync=false,
> > requiresResponse=false, correlationID=-1, queueName=MultiQ_16112]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet
> > SessionQueueQueryResponseMessage_V3[type=-14, channelID=11,
> > responseAsync=false, requiresResponse=false, correlationID=-1,
> > address=MultiQ_16112, name=MultiQ_16112, consumerCount=0,
> > filterString=null, durable=true, exists=false, temporary=false,
> > messageCount=0, autoCreationEnabled=true, autoCreated=false,
> > purgeOnNoConsumers=false, routingType=MULTICAST, maxConsumers=-1,
> > exclusive=false, groupRebalance=false,
> > groupRebalancePauseDispatch=false, groupBuckets=-1,
> > groupFirstKey=null, lastValue=false, lastValueKey=null,
> > nonDestructive=false, consumersBeforeDispatch=0,
> > delayBeforeDispatch=-1, autoDelete=false, autoDeleteDelay=0,
> > autoDeleteMessageCount=0, defaultConsumerWindowSize=1048576,
> > ringSize=-1, enabled=null, configurationManaged=false]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking
> > SessionBindingQueryMessage[type=49, channelID=11, responseAsync=false,
> > requiresResponse=false, correlationID=-1, address=MultiQ_16112]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet
> > SessionBindingQueryResponseMessage_V5[type=-22, channelID=11,
> > responseAsync=false, requiresResponse=false, correlationID=-1,
> > exists=false, queueNames=[], autoCreateQueues=true,
> > autoCreateAddresses=true, defaultPurgeOnNoConsumers=false,
> > defaultMaxConsumers=-1, defaultExclusive=false,
> > defaultLastValue=false, defaultLastValueKey=null,
> > defaultNonDestructive=false, defaultConsumersBeforeDispatch=0,
> > defaultDelayBeforeDispatch=-1, supportsMulticast=false,
> > supportsAnycast=false]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking
> > CreateQueueMessage_V2[type=-12, channelID=11, responseAsync=false,
> > requiresResponse=true, correlationID=-1, address=MultiQ_16112,
> > queueName=MultiQ_16112, filterString=null, durable=true,
> > temporary=false, autoCreated=true, routingType=ANYCAST,
> > maxConsumers=-1, purgeOnNoConsumers=false, exclusive=null,
> > groupRebalance=null, groupRebalancePauseDispatch=null,
> > groupBuckets=null, groupFirstKey=null, lastValue=null,
> > lastValueKey=null, nonDestructive=null, consumersBeforeDispatch=null,
> > delayBeforeDispatch=null, autoDelete=null, autoDeleteDelay=null,
> > autoDeleteMessageCount=null, ringSize=null, enabled=null]
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending packet
> > nonblocking Ping[type=10, channelID=0, responseAsync=false,
> > requiresResponse=false, correlationID=-1, connectionTTL=60000] on
> > channelID=0
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c Writing buffer for
> > channelID=0
> > 256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet
> > Ping[type=10, channelID=0, responseAsync=false,
> > requiresResponse=false, correlationID=-1, connectionTTL=60000]
> > Exception in thread "main" javax.jms.JMSException: AMQ219014: Timed
> > out after waiting 30000 ms for response when sending packet -12
> >         at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:570)
> >         at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:464)
> >         at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:456)
> >         at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.createQueue(ActiveMQSessionContext.java:856)
> >         at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.internalCreateQueue(ClientSessionImpl.java:1953)
> >         at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.createQueue(ClientSessionImpl.java:326)
> >         at org.apache.activemq.artemis.utils.AutoCreateUtil.autoCreateQueue(AutoCreateUtil.java:57)
> >         at org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:917)
> >         at org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:563)
> >         at org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:529)
> >         at MultiQueue.subscribeToQ(MultiQueue.java:132)
> >         at MultiQueue.main(MultiQueue.java:158)
> > Caused by: org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException:[errorType=CONNECTION_TIMEDOUT message=AMQ219014: Timed out after waiting 30000 ms for response when sending packet -12]
> >         ... 12 more
> >
> > The number 16112 we reached depends on a lot of things, like the
> > timing of when the web console is opened.
> > As said before, the OOM was probably a side effect caused by something
> > else.
> > The broker has 4G of memory and runs just fine in other cases.
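> >
> > (Packet type -12 is the blocking CreateQueueMessage_V2 shown in the
> > log above, and 30000 ms is the core client's default call timeout. If
> > the reproducer just needs to survive a slow broker, the timeout can
> > presumably be raised on the connection URL, e.g.
> > tcp://localhost:61616?callTimeout=120000 - callTimeout is a core
> > client URL parameter; the value here is only an example.)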
> > Is there a nightly build of the broker? I would be curious to test it.
> >
> > Best Regards
> >
> > Herbert
> >
> > Von: "Gašper Čefarin" <[email protected]_
> > <mailto:[email protected]>>
> > An: "[email protected]_ <mailto:[email protected]>"
> > <[email protected]_ <mailto:[email protected]>>
> > Datum: 09.10.2025 14:19
> > Betreff: [Ext] Re: WebConsole on broker with many queues
> > ------------------------------------------------------------------------
> >
> > I cannot reproduce the issue with 1000 queues but no
> > messages/consumers/producers.
> >
> > There was a big load on the broker when the web console was gathering
> > permission info for every queue - this (and the slow rendering of many
> > queues) is now fixed in the current version, which is not yet
> > released.
> > I'm not sure it could cause OOM though.
> >
> > Answers to the questions asked by Justin would help a lot in
> > pinpointing the issue.
> > I would also ask how many consumers/producers were online.
> >
> >
> >
> >
> > ------------------------------------------------------------------------
> > From: Alexander Milovidov <[email protected]>
> > Sent: 08 October 2025 21:23:23
> > To: [email protected]
> > Subject: Re: WebConsole on broker with many queues
> >
> >
> > To sporočilo izvira izven naše organizacije. Bodite pozorni pri
> > vsebini in odpiranju povezav ali prilog.
> >
> >
> >
> >
> > I also had a similar issue with the performance of the Artemis 2.40.0
> > web console with about 3000 addresses/queues, but did not have much
> > time to investigate the issue, gather thread dumps, create a
> > reproducer, etc.
> > And we still have not tried migrating any of our Artemis instances to
> > 2.40+ (even smaller ones).
> >
> > On Mon, Oct 6, 2025 at 17:00, <[email protected]> wrote:
> >
> > > Hello Team,
> > >
> > > on an Artemis 2.42 broker with some thousands of queues and topics
> > > we can reproduce the following case:
> > > open the web console.
> > >
> > >    - broker blocked, browser frozen
> > >    - 100% CPU for the broker process
> > >    - this situation lasts longer than the client session keep-alive
> > >      period (30 sec); therefore clients terminate their connections
> > >    - additional tasks clean up all the objects
> > >
> > > We had a single case where the broker completely crashed with an
> > > OOM in such a situation.
> > > But in most cases the broker survives, with all clients gone to
> > > another broker by disaster failover.
> > > Should we avoid the web console entirely, or is there a switch to
> > > keep this load out of the broker?
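> > >
> > > For example, would removing the console web app from the embedded
> > > web server be such a switch? A sketch of what I mean, assuming the
> > > stock etc/bootstrap.xml layout (exact attributes may differ by
> > > version):
> > >
> > >    <web path="web">
> > >       <binding name="artemis" uri="http://localhost:8161">
> > >          <!-- <app name="console" url="console" war="console.war"/> -->
> > >       </binding>
> > >    </web>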
> > >
> > > Best Regards
> > >
> > > Herbert
>
> --
> Tim Bish
>
