Hi,

I can confirm what Herbert describes here. Using the console on thousands of 
addresses and queues is very slow and, given a large enough quantity, unusable. 
We did experience OOMs with the previous console version (the one which used 
the old Hawtio 2.x). Performance of the Hawtio 4.x series is better, and (I 
think) I have never seen an OOM with it, but it is still very slow with 
thousands of queues.

I was monitoring https://github.com/apache/activemq-artemis-console/pull/108 
for a while, but it looks like it still hasn’t made it into the Artemis codebase. 
I’m eager to try again with the latest Artemis console when it gets released.

--
    Vilius

From: [email protected] <[email protected]>
Sent: Monday, October 13, 2025 5:03 PM
To: [email protected]
Subject: Re: [Ext] Re: WebConsole on broker with many queues

Hello Gašper,

1000 queues are not enough to hit the session TTL.
It has been reported that the WebConsole does not open if the number of objects 
(queues, topics) is too big.
A customer had 18000 address objects, and opening the WebConsole made all clients 
connect to another broker in the cluster.
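
If the goal is to keep client sessions from dropping during a long console-induced 
stall, the client-side keep-alive window can in principle be widened. A minimal 
sketch, assuming the standard core-client URL parameters (the host name and the 
chosen values below are illustrative, not taken from this setup):

    import javax.jms.Connection;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class TtlTolerantClient {
        public static void main(String[] args) throws Exception {
            // Illustrative values only: the defaults are clientFailureCheckPeriod=30000 ms
            // and connectionTTL=60000 ms (the Ping packets in the trace below show the
            // latter). Raising both gives the client more headroom before it declares
            // the broker dead and fails over to another cluster member.
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                    "tcp://broker:61616?clientFailureCheckPeriod=120000&connectionTTL=240000");
            try (Connection connection = cf.createConnection()) {
                connection.start();
                // ... normal JMS work here ...
            }
        }
    }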

The program MultiQueue.java creates a given number of queues to simulate this.


Start it with three arguments:
0: connection URL
1: prefix, e.g. MultiQ
2: number of queues, e.g. 18000
It will connect to the connection URL and try to create MultiQ_0 ... MultiQ_17999.
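
For reference, a minimal sketch of what such a reproducer can look like (the 
actual MultiQueue.java source is not included here, so class and variable names 
are illustrative; the consumer-driven auto-creation of queues matches the stack 
trace further down):

    import javax.jms.Connection;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class MultiQueueSketch {
        public static void main(String[] args) throws Exception {
            String url = args[0];                   // e.g. tcp://localhost:61616
            String prefix = args[1];                // e.g. MultiQ
            int count = Integer.parseInt(args[2]);  // e.g. 18000

            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(url);
            try (Connection connection = cf.createConnection()) {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                for (int i = 0; i < count; i++) {
                    Queue queue = session.createQueue(prefix + "_" + i);
                    // Creating a consumer on a queue that does not exist yet makes the
                    // broker auto-create it (AutoCreateUtil.autoCreateQueue in the trace).
                    session.createConsumer(queue);
                    System.out.println("serving on:" + queue);
                }
                // Keep the connection (and thus the consumers) open while the
                // WebConsole is opened against the broker.
                Thread.sleep(600_000);
            }
        }
    }
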
When MultiQ_16000 or so has been created, try to open the WebConsole:
after login it is completely knocked out and seems to push the broker to its 
limit.
After some time the test program terminates:

serving on:ActiveMQQueue[MultiQ_16111]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking 
SessionQueueQueryMessage[type=45, channelID=11, responseAsync=false, 
requiresResponse=false, correlationID=-1, queueName=MultiQ_16112]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet 
SessionQueueQueryResponseMessage_V3[type=-14, channelID=11, 
responseAsync=false, requiresResponse=false, correlationID=-1, 
address=MultiQ_16112, name=MultiQ_16112, consumerCount=0, filterString=null, 
durable=true, exists=false, temporary=false, messageCount=0, 
autoCreationEnabled=true, autoCreated=false, purgeOnNoConsumers=false, 
routingType=MULTICAST, maxConsumers=-1, exclusive=false, groupRebalance=false, 
groupRebalancePauseDispatch=false, groupBuckets=-1, groupFirstKey=null, 
lastValue=false, lastValueKey=null, nonDestructive=false, 
consumersBeforeDispatch=0, delayBeforeDispatch=-1, autoDelete=false, 
autoDeleteDelay=0, autoDeleteMessageCount=0, defaultConsumerWindowSize=1048576, 
ringSize=-1, enabled=null, configurationManaged=false]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking 
SessionBindingQueryMessage[type=49, channelID=11, responseAsync=false, 
requiresResponse=false, correlationID=-1, address=MultiQ_16112]]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet 
SessionBindingQueryResponseMessage_V5[type=-22, channelID=11, 
responseAsync=false, requiresResponse=false, correlationID=-1, exists=false, 
queueNames=[], autoCreateQueues=true, autoCreateAddresses=true, 
defaultPurgeOnNoConsumers=false, defaultMaxConsumers=-1, 
defaultExclusive=false, defaultLastValue=false, defaultLastValueKey=null, 
defaultNonDestructive=false, defaultConsumersBeforeDispatch=0, 
defaultDelayBeforeDispatch=-1, supportsMulticast=false, supportsAnycast=false]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending blocking 
CreateQueueMessage_V2[type=-12, channelID=11, responseAsync=false, 
requiresResponse=true, correlationID=-1, address=MultiQ_16112, 
queueName=MultiQ_16112, filterString=null, durable=true, temporary=false, 
autoCreated=true, routingType=ANYCAST, maxConsumers=-1, 
purgeOnNoConsumers=false, exclusive=null, groupRebalance=null, 
groupRebalancePauseDispatch=null, groupBuckets=null, groupFirstKey=null, 
lastValue=null, lastValueKey=null, nonDestructive=null, 
consumersBeforeDispatch=null, delayBeforeDispatch=null, autoDelete=null, 
autoDeleteDelay=null, autoDeleteMessageCount=null, ringSize=null, enabled=null]
256 ||| slf4j ||| RemotingConnectionID=5b5b386c Sending packet nonblocking 
Ping[type=10, channelID=0, responseAsync=false, requiresResponse=false, 
correlationID=-1, connectionTTL=60000] on channelID=0
256 ||| slf4j ||| RemotingConnectionID=5b5b386c Writing buffer for channelID=0
256 ||| slf4j ||| RemotingConnectionID=5b5b386c handling packet Ping[type=10, 
channelID=0, responseAsync=false, requiresResponse=false, correlationID=-1, 
connectionTTL=60000]
Exception in thread "main" javax.jms.JMSException: AMQ219014: Timed out after 
waiting 30000 ms for response when sending packet -12
        at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:570)
        at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:464)
        at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:456)
        at 
org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.createQueue(ActiveMQSessionContext.java:856)
        at 
org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.internalCreateQueue(ClientSessionImpl.java:1953)
        at 
org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.createQueue(ClientSessionImpl.java:326)
        at 
org.apache.activemq.artemis.utils.AutoCreateUtil.autoCreateQueue(AutoCreateUtil.java:57)
        at 
org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:917)
        at 
org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:563)
        at 
org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:529)
        at MultiQueue.subscribeToQ(MultiQueue.java:132)
        at MultiQueue.main(MultiQueue.java:158)
Caused by: 
org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException: 
[errorType=CONNECTION_TIMEDOUT message=AMQ219014: Timed out after waiting 30000 
ms for response when sending packet -12]
        ... 12 more
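
(The packet "-12" in AMQ219014 is the CreateQueueMessage_V2 visible in the log 
above, and the 30000 ms is the core client's default blocking callTimeout. If 
the intent is only to let the reproducer keep going while the broker is busy, 
that timeout can be raised on the connection URL; a hedged fragment, again with 
illustrative host and value:)

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class LongCallTimeoutFactory {
        // Illustrative only: callTimeout bounds how long a blocking packet (such as
        // the CreateQueueMessage_V2 above) waits for its response; the default is
        // 30000 ms, which is exactly what AMQ219014 reports.
        static ActiveMQConnectionFactory create(String host) {
            return new ActiveMQConnectionFactory("tcp://" + host + ":61616?callTimeout=120000");
        }
    }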

The number 16112 we reached depends on a lot of things, like the timing of the 
WebConsole opening.
As said before, the OOM was probably a side effect caused by something else.
The broker has 4G of memory and runs just fine in other cases.
Is there a nightly build of the broker? I would be curious to test it.

Best Regards

Herbert
________________________________

Herbert Helmstreit
Senior Software Engineer

Phone: +49 941 / 7 83 92 36
[email protected]<mailto:[email protected]>

[cid:[email protected]]

www.systema.com


SYSTEMA
Systementwicklung Dipl.-Inf. Manfred Austen GmbH

Manfred-von-Ardenne-Ring 6 | 01099 Dresden
HRB 11256 Amtsgericht Dresden | USt.-ID DE 159 607 786
Geschäftsführer: Manfred Austen, Enno Danke, Dr. Ulf Martin, Jürg Matweber





From:        "Gašper Čefarin" <[email protected]>
To:        [email protected]
Date:        09.10.2025 14:19
Subject:        [Ext] Re: WebConsole on broker with many queues
________________________________


I cannot reproduce the issue with 1000 queues but no 
messages/consumers/producers.

A big load was produced on the broker when the web console was gathering 
permission info for every queue - this (and the slow rendering of many queues) 
is now fixed in the current version, which is not yet released.
I'm not sure it could cause an OOM, though.

Answers to the questions Justin asked would help a lot to pinpoint the issue.
I would also ask how many consumers/producers were online.


________________________________

From: Alexander Milovidov <[email protected]>
Sent: 08 October 2025 21:23:23
To: [email protected]
Subject: Re: WebConsole on broker with many queues






I also had a similar issue with the performance of the Artemis 2.40.0 web console
with about 3000 addresses/queues, but did not have much time to investigate the
issue, gather thread dumps, create a reproducer, etc.
And we still have not tried to migrate any of our Artemis instances to 2.40+
(even the smaller ones).

Mon, 6 Oct 2025 at 17:00, <[email protected]>:

> Hello Team,
>
> On an Artemis broker 2.42 with some thousands of queues and topics we can
> reproduce the following case:
> open a WebConsole.
>
>    - broker blocked, browser frozen
>    - 100% CPU for broker process
>    - This situation lasts longer than the client session keep-alive period
>    (30 sec). Therefore clients terminate their connections.
>    - additional tasks clean up all the objects.
>
> We had a single case where the broker completely crashed with an OOM in such
> a situation.
> But in most cases the broker survives, with all clients having gone to another
> broker via disaster failover.
> Should we avoid the WebConsole altogether, or is there a switch to keep this
> load off the broker?
>
> Best Regards
>
> Herbert
> ------------------------------
>
> *Herbert Helmstreit*
> Senior Software Engineer
>
> Phone: +49 941 / 7 83 92 36
> [email protected]<mailto:[email protected]>
>
> www.systema.com
>
>
> SYSTEMA
> Systementwicklung Dipl.-Inf. Manfred Austen GmbH
>
> Manfred-von-Ardenne-Ring 6 | 01099 Dresden
> HRB 11256 Amtsgericht Dresden | USt.-ID DE 159 607 786
> Geschäftsführer: Manfred Austen, Enno Danke, Dr. Ulf Martin, Jürg Matweber
>
>
>
