Justin, thank you!
Does the direct memory used by Netty also apply to the clients? We've recently seen an OOM-kill in one of our client apps as well. Can you offer any guidance about direct-memory use? For example, do we need to limit it, or add something to our pod request/limit to allow for what is going to be used? We've read that -XX:MaxDirectMemorySize controls this, but I'm unsure whether it applies to Netty.

Finally, I was apprised of one other oddity: when our AMQ pod was OOMKilled, the replacement pod immediately started at 2 GB (total) memory use and climbed to 3 GB within 12 hours. Is this normal? Our use is fairly light, maybe 10 messages/second.

Thanks
john

________________________________
From: Justin Bertram <[email protected]>
Sent: Monday, January 5, 2026 10:44 AM
To: [email protected] <[email protected]>
Subject: Re: diagnosing memory issues

Keep in mind that Netty will use direct memory, which is not on the heap. Details for Netty's allocator are reported via the metrics subsystem [1]. The OOM killer might be acting based on this usage.

Based on your command-line parameters I see you're using the Prometheus Java agent as well as the JVM-managed MBean server. You might consider using the broker-managed MBean server (configured via management.xml) and the Prometheus Exporter plugin for the metrics subsystem [2]. The broker-managed MBean server supports role-based access control [3], which means you can have things like read-only management users, full read/write admins, users who can only read/write specific resources, etc. Right now it appears that management for the brokers in your environment is completely unsecured.


Justin

P.S. In case you haven't heard, Artemis was launched into its own top-level Apache project [4]. Therefore, in the future you're encouraged to use the Artemis-specific mailing lists [5].
[1] https://artemis.apache.org/components/artemis/documentation/latest/metrics.html#netty-allocator
[2] https://github.com/rh-messaging/artemis-prometheus-metrics-plugin
[3] https://artemis.apache.org/components/artemis/documentation/latest/management.html#role-based-authorisation-for-jmx
[4] https://activemq.apache.org/news/artemis-tlp
[5] https://artemis.apache.org/contact

On Tue, Dec 23, 2025 at 2:14 PM John Lilley <john.lilley@redpointglobal.com> wrote:
> Greetings!
>
> We are having issues with our Artemis pod in K8S: the pod memory size
> slowly grows and it is eventually OOM killed, at least our cloud ops team
> sees its pod memory growing steadily.
> Sometimes we also see that the pod is not OOM killed but kind of grinds to
> a halt, which in my experience is consistent with a mostly-exhausted JVM heap.
> However... we never seem to get an OutOfMemoryError.
>
> The command line as reported from the pod is:
>
> /opt/java/openjdk/bin/java -XX:AutoBoxCacheMax=20000
>   -XX:+PrintClassHistogram -XX:+UseG1GC -XX:+UseStringDeduplication
>   -Xms512M -Xmx2G
>   -Dhawtio.disableProxy=true -Dhawtio.realm=activemq -Dhawtio.offline=true
>   -Dhawtio.rolePrincipalClasses=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal
>   -Dhawtio.http.strictTransportSecurity=max-age=31536000;includeSubDomains;preload
>   -Djolokia.policyLocation=file:/var/lib/artemis-instance/./etc/jolokia-access.xml
>   -Dlog4j2.disableJmx=true
>   --add-opens java.base/jdk.internal.misc=ALL-UNNAMED
>   -javaagent:/var/lib/artemis-instance/lib/jmx_prometheus_javaagent.jar=8080:/var/lib/artemis-instance/etc/artemis.yml
>   -Dcom.sun.management.jmxremote=true
>   -Dcom.sun.management.jmxremote.port=1099
>   -Dcom.sun.management.jmxremote.rmi.port=1098
>   -Dcom.sun.management.jmxremote.ssl=false
>   -Dcom.sun.management.jmxremote.authenticate=false
>   -Dhawtio.role=amq
>   -Djava.security.auth.login.config=/var/lib/artemis-instance/etc/login.config
>   -classpath /opt/activemq-artemis/lib/artemis-boot.jar
>   -Dartemis.home=/opt/activemq-artemis
>   -Dartemis.instance=/var/lib/artemis-instance
>   -Djava.library.path=/opt/activemq-artemis/bin/lib/linux-x86_64
>   -Djava.io.tmpdir=/var/lib/artemis-instance/tmp
>   -Ddata.dir=/var/lib/artemis-instance/data
>   -Dartemis.instance.etc=/var/lib/artemis-instance/etc
>   org.apache.activemq.artemis.boot.Artemis run
>
> It seems like the heap limit of 2 GB should be fine; we are setting our
> K8S pod request/limit to 3 GB.
>
> Mostly what I'm after is advice for enabling logging or diagnostics that
> would help us understand what is happening.
>
> Thanks
> john
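
On the client-side direct-memory question above: Netty's pooled allocator exposes its own usage counters, so a client application can log how much direct memory Netty has actually reserved and compare that against the pod's request/limit. A minimal sketch, assuming the client's Netty channels allocate from the default pooled allocator (PooledByteBufAllocator.DEFAULT); the class name is illustrative only:

    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.buffer.PooledByteBufAllocatorMetric;

    // Illustrative sketch: print what Netty's default pooled allocator has
    // reserved. Assumes the client allocates from PooledByteBufAllocator.DEFAULT;
    // adjust if a custom allocator is configured.
    public final class NettyDirectMemoryProbe {

        public static void report() {
            PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
            System.out.printf(
                    "netty pooled: direct=%d MB, heap=%d MB, directArenas=%d, chunkSize=%d bytes%n",
                    metric.usedDirectMemory() >> 20,
                    metric.usedHeapMemory() >> 20,
                    metric.numDirectArenas(),
                    metric.chunkSize());
        }

        public static void main(String[] args) {
            report();
        }
    }

As far as I understand it, -XX:MaxDirectMemorySize caps JDK-allocated direct ByteBuffers, and Netty derives its default internal limit from the same value, but Netty can allocate outside the JDK's accounting, so the allocator metric above is the more reliable signal. Whichever cap applies, the pod limit needs headroom beyond -Xmx for direct memory plus JVM overhead such as metaspace, thread stacks, and code cache.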

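On the original request for diagnostics: one low-effort option is to log heap, non-heap, and NIO buffer-pool usage on a schedule from inside the JVM and line those numbers up against the pod's reported memory over time. If the pod's memory keeps climbing while these figures stay flat, the growth is happening outside what the JVM reports (native allocations), which points away from a plain heap leak. A minimal sketch using only java.lang.management; the class name and the 60-second interval are illustrative:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Illustrative sketch: periodically print JVM-side memory figures so they
    // can be correlated with the pod's reported memory usage over time.
    public final class MemorySnapshotLogger {

        public static void main(String[] args) {
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
                System.out.printf("heap used/committed=%d/%d MB, non-heap used=%d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, nonHeap.getUsed() >> 20);
                // The "direct" pool covers JDK-allocated direct ByteBuffers; Netty's
                // pooled direct memory may not be fully reflected here.
                for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                    System.out.printf("  buffer pool %s: used=%d MB, capacity=%d MB%n",
                            pool.getName(), pool.getMemoryUsed() >> 20, pool.getTotalCapacity() >> 20);
                }
            }, 0, 60, TimeUnit.SECONDS);
        }
    }

Inside the broker itself the metrics subsystem [1] should already expose equivalent figures, so a sketch like this is mainly useful for the client applications.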