And on your question about the 0.6GB: I have learned that the max value you 
set only applies to the Java heap. There are other types of memory 
allocations (particularly memory allocated by JNI native calls within the 
JVM) that seem to escape that limit.  I think there is a fair chance that 
some or all of the direct memory Justin is talking about falls into that 
category.
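
If you want to confirm where the non-heap memory goes, the JVM's Native
Memory Tracking can break it down, and direct buffers can be capped
explicitly. A minimal sketch -- the flags are standard HotSpot options, but
the artemis.profile location and the pgrep pattern are assumptions about your
install:

```shell
# Sketch: flags to add to JAVA_ARGS (e.g. in etc/artemis.profile; the path is
# an assumption). MaxDirectMemorySize caps NIO direct buffers;
# NativeMemoryTracking makes non-heap allocations visible to jcmd.
EXTRA_JVM_FLAGS="-Xmx2G -XX:MaxDirectMemorySize=512M -XX:NativeMemoryTracking=summary"
echo "$EXTRA_JVM_FLAGS"

# Then, on the running broker, break native memory down by category:
# jcmd "$(pgrep -f artemis | head -n1)" VM.native_memory summary
```

Note that NMT itself adds a small overhead, so you may want it on only while
diagnosing.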

I have regularly seen the OS report much more memory allocated than the max 
specified to Java.  And of course that OS-level value is what the OOM killer 
respects.
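
One way to see this concretely is to compare the broker's resident set
against your -Xmx over time. kb_to_gb is a small helper I made up, and the
pgrep pattern is an assumption about how the broker appears in the process
list:

```shell
# Helper: convert the kB figures that ps and the oom-killer report into GiB.
kb_to_gb() { awk -v kb="$1" 'BEGIN { printf "%.2f", kb / (1024 * 1024) }'; }

kb_to_gb 1048576   # prints 1.00

# On the broker host, sample every minute and watch for growth past the 2G
# heap cap (uncomment to run):
# PID=$(pgrep -f artemis | head -n1)
# while sleep 60; do echo "$(date) rss_gb=$(kb_to_gb "$(ps -o rss= -p "$PID")")"; done
```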

I would still check for a leak if you are able to run for a while before it 
craters.
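
When it does crater, the kernel's report in the journal tells you how big the
process actually was. A sketch of pulling the anon-rss figure out of a report
line like yours -- the journalctl invocation is the standard systemd one, and
the sample line is copied from your log:

```shell
# On the broker host: journalctl -k | grep -i "out of memory"
# Parsing the resident figure out of a kernel report line:
LINE='Out of memory: Killed process 4086 (java) total-vm:4939656kB, anon-rss:2627224kB, file-rss:0kB, shmem-rss:0kB, UID:1001'
ANON_RSS_KB=$(printf '%s\n' "$LINE" | sed -n 's/.*anon-rss:\([0-9]*\)kB.*/\1/p')
echo "anon-rss: ${ANON_RSS_KB} kB"   # anon-rss: 2627224 kB
```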

Cheers!

On Jan 8, 2025, 9:23 AM, Justin Bertram <jbert...@apache.org> wrote:
>ActiveMQ Artemis uses Netty for just about everything involving the network,
>and Netty uses direct memory.
>
>Do you have any JVM on/off heap memory monitoring? If not I'd recommend
>enabling the JVM and Netty [1] metrics, e.g.:
>
>    <metrics>
>       <jvm-memory>true</jvm-memory>
>       <netty-pool>true</netty-pool>
>       <plugin class-name="..."/>
>    </metrics>
>
>You can, for example, scrape these with Prometheus [2], graph the usage,
>and then correlate that with when you hit OOME.
>
>I'm not aware of any specific memory recommendations other than to
>benchmark, monitor, and tune for your specific use-case. There are many
>different types of use-cases with widely varying memory usage patterns
>as well as a wealth of tuning options.
>
>
>Justin
>
>[1]
>https://activemq.apache.org/components/artemis/documentation/latest/metrics.html#optional-metrics
>[2] https://github.com/jbertram/artemis-prometheus-metrics-plugin
>
>On Wed, Jan 8, 2025 at 9:31 AM Vilius Šumskas
><vilius.sums...@rivile.lt>
>wrote:
>
>> Thank you for your reply.
>>
>> VMs do not run anything else, just a standard OpenSSH server and the
>> Google Cloud Monitoring Agent. I assume that these processes and the OS
>> should fit into the remaining 1.4GB. Usually these services consume just
>> kilobytes of memory.
>>
>> The JVM app in question is the Artemis broker itself, which runs as a
>> standalone cluster. We run our JVM apps on a Kubernetes cluster and
>> connect to the Artemis cluster externally.
>>
>> Usually off-the-shelf software running on the JVM has its own memory
>> recommendations. For example Jenkins
>> https://docs.cloudbees.com/docs/cloudbees-ci/latest/jvm-troubleshooting/#_heap_size,
>> or Sonatype Nexus Repository
>> https://help.sonatype.com/en/nexus-repository-memory-overview.html .
>> Hence my question regarding how to properly calculate Artemis memory sizes.
>>
>> The main question is where this 0.6GB is coming from if I specifically
>> set the Artemis heap size to 2G? Is this some kind of direct memory usage
>> in the broker? I don't believe that it could be just JVM overhead.
>>
>> --
>>     Vilius
>>
>> -----Original Message-----
>> From: David Bennion <dav...@gmx.com.INVALID>
>> Sent: Wednesday, January 8, 2025 4:50 PM
>> To: users@activemq.apache.org
>> Subject: Re: avoiding oom-killer on Artemis
>>
>> The OOM Killer happens when Linux determines it doesn't have enough
>> memory to keep going, so in an act of self-preservation it kills a large
>> memory-consuming process rather than crashing.
>>
>> In your case we can see from the OOM Killer log that you had 2627224kB
>> resident, which I think is closer to 2.6GB.  That leaves 1.4GB for the OS
>> and all other applications.
>>
>> The follow-up question is then: is there anything else running on the box
>> other than your JVM? The total memory for all apps and the OS must fit
>> within the 4GB max. So even small apps that use any memory can accumulate
>> and cause Linux to invoke the OOM killer.
>>
>> Are you certain there is no memory leak in your JVM app?  Does it take
>> some time before you get OOM killed? A memory leak would certainly push
>> your consumption on the OS past the point where garbage collection can
>> help. Unfortunately, Linux will simply kill the app rather than ask Java
>> to garbage collect. If there is no memory leak, I wonder if garbage
>> collection settings could be modified to be more aggressive?
>>
>> Can the JVM app survive with less memory and still be performant?  If you
>> were to set the app to use 1.5GB or 1GB, what happens?  Can you watch this
>> problem occur on a console? If you run "top" on that machine sorted by
>> memory, do you see other memory consumers?  Are those expected?
>>
>> Regards,
>> David
>>
>>
>>
>> On Jan 8, 2025, 12:31 AM, "Vilius Šumskas" <vilius.sums...@rivile.lt> wrote:
>> >Hi,
>> >
>> >I'm trying to configure our Artemis cluster nodes in such a way that
>> >they won't get oom-killed by the Linux kernel, but instead paging would
>> >occur. Most of the time it works; however, after considerable periods of
>> >paging we sometimes get:
>> >
>> >Jan 07 19:02:11 activemq-artemis-node-1 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/activemq.service,task=java,pid=4086,uid=1001
>> >Jan 07 19:02:11 activemq-artemis-node-1 kernel: Out of memory: Killed process 4086 (java) total-vm:4939656kB, anon-rss:2627224kB, file-rss:0kB, shmem-rss:0kB, UID:1001 pgtables:5588kB oom_score_adj:0
>> >
>> >Full oom-killer dump https://p.defau.lt/?HayAcFR8RRDrVZZAlav4IA
>> >
>> >Artemis nodes are using the standard JVM options coming from the
>> >official distribution: -Xms512M -Xmx2G. They run on dedicated virtual
>> >hosts which have 4GB assigned. Nothing else runs on those boxes.
>> >global-max-size is not set, so it occupies 1G (half of the max heap).
>> >
>> >Just wondering, how should JVM heap sizes be configured in relation to
>> >available RAM? I found just this vague description
>> >https://activemq.apache.org/components/artemis/documentation/latest/perf-tuning.html#tuning-the-vm
>> >but nothing concrete. There is also the question of JVM Direct Memory
>> >and other memory sectors which are not controlled by heap settings. Does
>> >Artemis use those, and what are the recommended ratios for them?
>> >
>> >--
>> >   Best Regards,
>> >
>> >    Vilius Šumskas
>> >    Rivile
>> >    IT manager
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscr...@activemq.apache.org
>> For additional commands, e-mail: users-h...@activemq.apache.org
>> For further information, visit: https://activemq.apache.org/contact
>>
>>
