Your response [1] appears to have been truncated for some reason. Could you
clarify?


Justin

[1] https://lists.apache.org/thread/6vl4g24vp9cdwnjy06mvb2zskysyd6hz


On Mon, Apr 14, 2025 at 2:35 PM Alexander Milovidov <milovid...@gmail.com>
wrote:

> Thank you for your help.
>
> This virtual
>
> Mon, Apr 14, 2025 at 20:56, Justin Bertram <jbert...@apache.org>:
>
> > Thanks for the thread dumps!
> >
> > I see that in all the "after_logon" thread dumps there is a thread like
> > this working:
> >
> > "qtp368040556-79" #79 prio=5 os_prio=0 cpu=3462.39ms elapsed=236.30s
> > tid=0x00007fd5253a96b0 nid=0x281 waiting on condition
> [0x00007fd4a68ef000]
> >    java.lang.Thread.State: WAITING (parking)
> >     at jdk.internal.misc.Unsafe.park(java.base@17.0.14/Native Method)
> >     - parking to wait for  <0x000000008bac9248> (a
> > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> >     at java.util.concurrent.locks.LockSupport.park(java.base@17.0.14
> > /LockSupport.java:341)
> >     at
> >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.14
> > /AbstractQueuedSynchronizer.java:506)
> >     at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.14
> > /ForkJoinPool.java:3465)
> >     at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.14
> > /ForkJoinPool.java:3436)
> >     at
> >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.14
> > /AbstractQueuedSynchronizer.java:1630)
> >     at
> >
> >
> org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:214)
> >     at
> > org.eclipse.jetty.ee9.nested.HttpOutput.channelWrite(HttpOutput.java:262)
> >     at org.eclipse.jetty.ee9.nested.HttpOutput.write(HttpOutput.java:873)
> >     at sun.nio.cs.StreamEncoder.writeBytes(java.base@17.0.14
> > /StreamEncoder.java:234)
> >     at sun.nio.cs.StreamEncoder.implWrite(java.base@17.0.14
> > /StreamEncoder.java:304)
> >     at sun.nio.cs.StreamEncoder.implWrite(java.base@17.0.14
> > /StreamEncoder.java:282)
> >     at sun.nio.cs.StreamEncoder.write(java.base@17.0.14
> > /StreamEncoder.java:132)
> >     - locked <0x00000000b69ce908> (a java.io.OutputStreamWriter)
> >     at java.io.OutputStreamWriter.write(java.base@17.0.14
> > /OutputStreamWriter.java:205)
> >     at java.io.BufferedWriter.flushBuffer(java.base@17.0.14
> > /BufferedWriter.java:120)
> >     - locked <0x00000000b69ce908> (a java.io.OutputStreamWriter)
> >     at java.io.BufferedWriter.write(java.base@17.0.14
> > /BufferedWriter.java:233)
> >     - locked <0x00000000b69ce908> (a java.io.OutputStreamWriter)
> >     at java.io.Writer.write(java.base@17.0.14/Writer.java:249)
> >     at org.jolokia.json.JSONWriter.escape(JSONWriter.java:210)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:116)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:41)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:121)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:41)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:121)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:41)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:121)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:41)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:121)
> >     at org.jolokia.json.JSONWriter.serialize(JSONWriter.java:79)
> >     at org.jolokia.json.JSONArray.writeJSONString(JSONArray.java:53)
> >     at
> >
> org.jolokia.server.core.util.IoUtil.streamResponseAndClose(IoUtil.java:37)
> >     at
> >
> >
> org.jolokia.server.core.http.AgentServlet.sendStreamingResponse(AgentServlet.java:557)
> >     at
> >
> >
> org.jolokia.server.core.http.AgentServlet.sendResponse(AgentServlet.java:545)
> >     at
> > org.jolokia.server.core.http.AgentServlet.handle(AgentServlet.java:355)
> >     at
> > org.jolokia.server.core.http.AgentServlet.doPost(AgentServlet.java:294)
> >     ...
> >
> > This is a Jetty thread servicing an HTTP request from the web console via
> > Jolokia. It is serializing MBean data into JSON and sending it back to the
> > web console for display. It's not clear whether this is CPU-bound or
> > I/O-bound. However, given that you say the CPU was at 100% for just "2-3
> > seconds" but the page still took 140 seconds to render, my guess is that
> > it is I/O-bound by something in your environment, probably due to the VM.
> >
> > That said, the previous console worked in essentially the same way, so I'm
> > puzzled about why there's a discrepancy in behavior between 2.39.0 and
> > 2.40.0 for this same use-case. Can you confirm that you see no (or
> > comparatively little) delay for this same use-case when using 2.39.0? Also,
> > could you open the developer console in your browser and watch the network
> > traffic when you log in to see how large the responses from Jolokia are?
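> >
> > If it helps, the same check can be done outside the browser. Below is a
> > rough sketch that times a Jolokia "list" request and reports the payload
> > size; the /console/jolokia path, port, and admin/admin credentials are
> > assumptions based on a default local install, so adjust them to your setup:
> >
> >     import java.net.URI;
> >     import java.net.http.HttpClient;
> >     import java.net.http.HttpRequest;
> >     import java.net.http.HttpResponse;
> >     import java.util.Base64;
> >
> >     public class JolokiaSizeCheck {
> >         public static void main(String[] args) throws Exception {
> >             // Assumed defaults for a local broker; adjust host, port, and credentials.
> >             String url = "http://localhost:8161/console/jolokia/";
> >             String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
> >
> >             // A plain Jolokia "list" request, similar in spirit to what the
> >             // console issues when it first loads.
> >             String body = "{\"type\":\"list\"}";
> >
> >             HttpClient client = HttpClient.newHttpClient();
> >             HttpRequest request = HttpRequest.newBuilder(URI.create(url))
> >                     .header("Authorization", "Basic " + auth)
> >                     .header("Content-Type", "application/json")
> >                     // Jolokia may reject requests without an allowed Origin header.
> >                     .header("Origin", "http://localhost:8161")
> >                     .POST(HttpRequest.BodyPublishers.ofString(body))
> >                     .build();
> >
> >             long start = System.nanoTime();
> >             HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());
> >             long elapsedMs = (System.nanoTime() - start) / 1_000_000;
> >
> >             System.out.println("HTTP " + response.statusCode() + ", "
> >                     + response.body().length + " bytes in " + elapsedMs + " ms");
> >         }
> >     }
> >
> > Comparing the byte count and elapsed time between 2.39.0 and 2.40.0 should
> > show whether the payload itself grew or the serialization simply got slower.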
> >
> > In the not-too-distant future we hope to mitigate large JSON payloads for
> > all use-cases using a new MBeanInfo cache feature from Jolokia [1].
> >
> >
> > Justin
> >
> > [1] https://jolokia.org/reference/html/manual/extensions.html#_mbeaninfo_cache
> >
> > On Mon, Apr 14, 2025 at 12:44 AM Alexander Milovidov <milovid...@gmail.com>
> > wrote:
> >
> > > I have reproduced this issue in a fresh Artemis installation.
> > > First I created a local Artemis instance on my laptop with 3,000 queues,
> > > and it performed perfectly without any issues.
> > > Then I created a virtual server in Proxmox VE (it runs on desktop-grade
> > > hardware). The virtual machine was configured with 1 CPU and 4 GB of RAM,
> > > and the OS is Debian 12. The Java heap settings are the defaults
> > > (Xms512m, Xmx2G). The symptoms were similar to those I see in the work
> > > environment, except the CPU was not loaded at 100% the whole time (only
> > > for 2-3 seconds). Loading the first page of the console took about 140
> > > seconds.
> > >
> > > I have captured some thread dumps: one before logon, several while the
> > > first page was loading, and one after the page had loaded.
> > > If attachments are not allowed here, I can upload them to a file-sharing
> > > service.
> > >
> > >
> > > Mon, Apr 7, 2025 at 21:34, Justin Bertram <jbert...@apache.org>:
> > >
> > >> I tried to reproduce this using a fresh instance of 2.40.0 with 2,500
> > >> queues defined in broker.xml (thanks to a bash script), but the console
> > >> loaded in just a few seconds.
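> > >>
> > >> For reference, the script essentially just generates <address>/<queue>
> > >> entries for the <addresses> section of broker.xml; a rough equivalent,
> > >> sketched here in Java with a purely illustrative "queue.NNNN" naming
> > >> pattern, would be:
> > >>
> > >>     // Minimal generator for broker.xml address/queue entries.
> > >>     public class GenerateQueues {
> > >>         public static void main(String[] args) {
> > >>             int count = args.length > 0 ? Integer.parseInt(args[0]) : 2500;
> > >>             StringBuilder xml = new StringBuilder();
> > >>             for (int i = 0; i < count; i++) {
> > >>                 String name = String.format("queue.%04d", i);
> > >>                 xml.append("<address name=\"").append(name).append("\">\n")
> > >>                    .append("   <anycast>\n")
> > >>                    .append("      <queue name=\"").append(name).append("\"/>\n")
> > >>                    .append("   </anycast>\n")
> > >>                    .append("</address>\n");
> > >>             }
> > >>             System.out.print(xml);
> > >>         }
> > >>     }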
> > >>
> > >> I then created a fresh instance of 2.39.0 with 2,500 queues defined in
> > >> broker.xml. Then I ran "artemis data exp" to export the data and then
> > >> created a fresh instance of 2.40.0 and ran "artemis data imp" to import
> > >> those queues. After that I opened the console and it loaded in a few
> > >> seconds.
> > >>
> > >> Could you perhaps grab a few thread dumps from the broker when you see
> > >> it running at 99% and share links to them?
> > >>
> > >> Also, are there further details you can share about your use-case? You
> > >> must be doing something different in your environment to get a result so
> > >> different from what I'm seeing.
> > >>
> > >>
> > >> Justin
> > >>
> > >>
> > >>
> > >> On Mon, Apr 7, 2025 at 10:24 AM Alexander Milovidov <milovid...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi All,
> > >> >
> > >> > Has anyone installed ActiveMQ Artemis 2.40.0 in an environment with a
> > >> > significant number of addresses and queues?
> > >> > I have several problems with the performance of the new Artemis console
> > >> > and the message broker. I installed Artemis 2.40.0 and imported data
> > >> > from a file which was exported from a test environment with approx.
> > >> > 2,500 queues.
> > >> >
> > >> > 1. When opening the Artemis console for the first time, it takes about
> > >> > 200-250 seconds to load.
> > >> > 2. While the console is loading, the server's processor is loaded at
> > >> > 99%. After it loads, the CPU load drops to 3-5%.
> > >> >
> > >> > This virtual machine has 1 processor, 4 GB of memory, and a 2 GB Java
> > >> > heap, and there were no problems running Artemis 2.39.0 with the same
> > >> > configuration. After increasing it to 4 CPUs and 8 GB, the console
> > >> > still takes about 150 seconds to load, and the 4 CPUs are loaded to
> > >> > 25%. Increasing the heap to 4 GB did not change anything.
> > >> >
> > >> > Is there any way to improve the performance of the new console?
> > >> >
> > >> > Another problem is that the bugfix ARTEMIS-5248 caused the audit logs
> > >> > to grow significantly. We are using the LDAPLoginModule with Active
> > >> > Directory authentication and authorization based on the user's group
> > >> > membership. Each user in our company's Active Directory domain can be
> > >> > a member of 100-400 domain groups, and each group is a role. The list
> > >> > of roles is logged in each audit log message, so each message can be
> > >> > quite large (up to 10 KB). Unfortunately, disabling audit logging does
> > >> > not affect the overall performance of the web console.
> > >> >
> > >> > I will later try to reproduce this issue in a fresh installation by
> > >> > creating 2500 empty queues.
> > >> >
> > >> > --
> > >> > Regards,
> > >> > Alexander
> > >> >
> > >>
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: users-unsubscr...@activemq.apache.org
> > > For additional commands, e-mail: users-h...@activemq.apache.org
> > > For further information, visit: https://activemq.apache.org/contact
> > >
> >
>
