Keep in mind that Netty uses direct memory, which is allocated outside the
JVM heap and so comes on top of your -Xmx limit (by default the JVM allows
roughly as much direct memory as heap). Details for Netty's allocator are
reported via the metrics subsystem [1]. Since the OOM killer reacts to the
pod's total memory footprint rather than just the heap, it may well be
acting on this off-heap usage, which would also explain why you never see
an OutOfMemoryError.
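
If you want to watch that usage directly, the broker can export the Netty
pool statistics alongside its other metrics. Below is a rough sketch of
the relevant broker.xml fragment; the exact element names depend on your
Artemis version, and the plugin class name is only my assumption based on
the Prometheus plugin mentioned below, so check both against the docs [1]
and the plugin's README before copying:

<metrics>
   <!-- Netty pooled-allocator stats (used direct/heap pool memory) -->
   <netty-pool>true</netty-pool>
   <!-- JVM heap/non-heap gauges are handy for the same investigation -->
   <jvm-memory>true</jvm-memory>
   <!-- nothing is exported without a metrics plugin; this class name is
        an assumption, take the exact value from the plugin's README -->
   <plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
</metrics>

Once that's in place the Netty allocator gauges show up on the same scrape
endpoint as the rest of the broker metrics, which makes it easy to
correlate them with the pod's overall memory growth.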

Based on your command-line parameters I see you're using the Prometheus
Java agent as well as the JVM-managed MBean server. You might consider
using the broker-managed MBean server (configured via management.xml) and
the Prometheus Exporter plugin for the metrics subsystem [2] instead. The
broker-managed MBean server supports role-based access control [3], which
means you can have things like read-only management users, full read/write
admins, users who can only read or write specific resources, etc. Right
now it appears that management for the brokers in your environment is
completely unsecured (remote JMX is enabled with both SSL and
authentication disabled).
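
For what it's worth, the role-based access control lives in management.xml
and looks roughly like the sketch below (modeled on the pattern in the
documentation [3]; the role names view/update/amq are placeholders for
whatever roles your login.config actually defines):

<management-context xmlns="http://activemq.apache.org/schema">
   <authorisation>
      <role-access>
         <match domain="org.apache.activemq.artemis">
            <!-- read-only operations for monitoring-type users -->
            <access method="list*" roles="view,update,amq"/>
            <access method="get*" roles="view,update,amq"/>
            <access method="is*" roles="view,update,amq"/>
            <!-- mutating operations restricted to admin-type roles -->
            <access method="set*" roles="update,amq"/>
            <access method="*" roles="amq"/>
         </match>
      </role-access>
   </authorisation>
</management-context>

With that in place you could also drop the unauthenticated
-Dcom.sun.management.jmxremote.* flags from your command line; if you
still need remote JMX, management.xml has its own connector element for
exposing it with authentication.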


Justin

P.S. In case you haven't heard, Artemis was launched into its own top-level
Apache project [4]. Therefore, in the future you're encouraged to use the
Artemis-specific mailing lists [5].

[1] https://artemis.apache.org/components/artemis/documentation/latest/metrics.html#netty-allocator
[2] https://github.com/rh-messaging/artemis-prometheus-metrics-plugin
[3] https://artemis.apache.org/components/artemis/documentation/latest/management.html#role-based-authorisation-for-jmx
[4] https://activemq.apache.org/news/artemis-tlp
[5] https://artemis.apache.org/contact

On Tue, Dec 23, 2025 at 2:14 PM John Lilley <[email protected]>
wrote:

> Greetings!
>
> We are having issues with our Artemis pod in K8S: the pod's memory usage
> slowly grows and it is eventually OOM killed; at least, our cloud ops team
> sees its pod memory growing steadily.
> Sometimes we also see that the pod is not OOM killed but instead grinds to
> a halt, which in my experience is consistent with a mostly-exhausted JVM
> heap. However... we never seem to get an OutOfMemoryError.
>
> The command-line as reported from the pod is
> /opt/java/openjdk/bin/java -XX:AutoBoxCacheMax=20000
> -XX:+PrintClassHistogram -XX:+UseG1GC -XX:+UseStringDeduplication -Xms512M
> -Xmx2G -Dhawtio.disableProxy=true -Dhawtio.realm=activemq
> -Dhawtio.offline=true
> -Dhawtio.rolePrincipalClasses=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal
> -Dhawtio.http.strictTransportSecurity=max-age=31536000;includeSubDomains;preload
> -Djolokia.policyLocation=file:/var/lib/artemis-instance/./etc/jolokia-access.xml
> -Dlog4j2.disableJmx=true --add-opens
> java.base/jdk.internal.misc=ALL-UNNAMED
> -javaagent:/var/lib/artemis-instance/lib/jmx_prometheus_javaagent.jar=8080:/var/lib/artemis-instance/etc/artemis.yml
> -Dcom.sun.management.jmxremote=true
> -Dcom.sun.management.jmxremote.port=1099
> -Dcom.sun.management.jmxremote.rmi.port=1098
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.authenticate=false -Dhawtio.role=amq
> -Djava.security.auth.login.config=/var/lib/artemis-instance/etc/login.config
> -classpath /opt/activemq-artemis/lib/artemis-boot.jar
> -Dartemis.home=/opt/activemq-artemis
> -Dartemis.instance=/var/lib/artemis-instance
> -Djava.library.path=/opt/activemq-artemis/bin/lib/linux-x86_64
> -Djava.io.tmpdir=/var/lib/artemis-instance/tmp
> -Ddata.dir=/var/lib/artemis-instance/data
> -Dartemis.instance.etc=/var/lib/artemis-instance/etc
> org.apache.activemq.artemis.boot.Artemis run
>
> It seems like the heap limit of 2GB should be just fine, since we are
> setting our K8S pod request/limit to 3GB.
>
> Mostly what I'm after is advice for enabling logging or diagnostics that
> would help to understand what is happening.
>
> Thanks
> john
>
