Hello,
I’m trying to compact the Artemis journal with ./artemis data compact, but no
matter what I do I always get this error:
[root@activemq-artemis-node-1 bin]# ./artemis data compact
2025-01-09 07:23:24,038 WARN [org.apache.activemq.artemis.logs] AMQ202017:
Algorithm two-way is deprecated an
That's what I was afraid of :). OK, I guess we will have to monitor it for a while
and come up with proper settings.
The Prometheus plugin looks interesting. I will try it and see how it compares with
https://cloud.google.com/monitoring/agent/ops-agent/third-party/activemq
--
Vilius
-----Original Message-----
And on your question about the 0.6GB, I have learned that the max value you set
only applies to the Java heap. There are other types of memory allocations
(particularly memory allocated by JNI native calls within the JVM) that seem to
escape that limit. I think there is a fai
Also keep in mind the JVM's native memory tracking [1] which might be
useful here.
Justin
[1] https://docs.oracle.com/en/java/javase/17/vm/native-memory-tracking.html
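For reference, native memory tracking is enabled with a JVM flag at startup and then queried with jcmd; a minimal sketch (the jar name and <pid> are placeholders for your own process):

```shell
# Start the JVM with native memory tracking enabled
# ("summary" is cheap; "detail" adds call-site level data)
java -XX:NativeMemoryTracking=summary -jar your-app.jar

# Later, query the running process (replace <pid> with the broker's PID)
jcmd <pid> VM.native_memory summary

# Optionally record a baseline and diff against it later to spot growth
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```

Note NMT itself does not cover memory allocated by third-party native code, but it does break down the JVM's own non-heap usage (thread stacks, metaspace, GC structures, etc.).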
On Wed, Jan 8, 2025 at 10:22 AM Justin Bertram wrote:
> ActiveMQ Artemis uses Netty for just about everything involving the
>
Hi,
There is a chance that the root cause is elsewhere, not in your ActiveMQ, Java
heap, or kernel configuration. On a very similar VM memory / Java heap combo, I have
seen OpenJDK JVMs being oom-killed by the kernel -- the JVM was not running
ActiveMQ, though.
The problem was caused by scheduled Cl
> Am I missing something or is there no Jakarta alternative for
> artemis-cdi-client yet?
You're not missing anything. There is currently no Jakarta alternative for
the artemis-cdi-client module.
> If so, is there something planned in this regard...
There is nothing currently planned as far as I'm
ActiveMQ Artemis uses Netty for just about everything involving the network
and Netty uses direct memory.
Do you have any JVM on/off-heap memory monitoring? If not, I'd recommend
enabling the JVM and Netty [1] metrics, e.g.:
<jvm-memory>true</jvm-memory>
<netty-pool>true</netty-pool>
You can, for example, scrape t
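Once metrics are exported, a quick check from the command line confirms the JVM/Netty gauges are present. The URL below is an assumption (the actual host, port, and path depend on which metrics plugin you wire in and where the embedded web server listens):

```shell
# Hypothetical endpoint; adjust host/port/path for your metrics plugin
curl -s http://localhost:8161/metrics | grep -E 'jvm_memory|netty'
```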
Hello,
Is there a way to configure the local broker to connect to the primary remote
broker and then only attempt to connect to the backup remote broker if the
primary is unavailable and avoid the problem I described originally?
uri="static:(failover:(ssl://primary-amq-broker:61617?ver
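If this is ActiveMQ Classic's network connector syntax (as the static:(failover:(...)) URI suggests), the failover transport's randomize=false and priorityBackup=true options give that behavior; a sketch with hypothetical connector and host names:

```xml
<!-- Sketch only; names are placeholders. randomize=false makes the transport
     try the listed URIs in order, and priorityBackup=true fails back to the
     primary once it becomes reachable again. -->
<networkConnector name="to-remote"
    uri="static:(failover:(ssl://primary-amq-broker:61617,ssl://backup-amq-broker:61617)?randomize=false&amp;priorityBackup=true)"/>
```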
Thank you for your reply.
The VMs do not run anything else, just a standard OpenSSH server and the Google
Cloud Monitoring Agent. I assume these processes and the OS should fit into the
remaining 1.4GB; usually these services consume just kilobytes of memory.
The JVM app in question is the Artemis broker itself
Hi!
I'm no expert on this - that said, I would recommend learning about the
vm.overcommit_memory and vm.oom-kill parameters (both in sysctl.conf)...
I've found an article that scratches the surface (the best reading is the
kernel docs, of course):
https://gist.github.com/t27/ad5219a7cdb7bcb977deccbc
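For concreteness, the overcommit policy can be inspected and persisted like this (a sketch; value 2 enables strict accounting and should be paired with an appropriate vm.overcommit_ratio for your workload):

```shell
# Show current policy: 0=heuristic (default), 1=always allow, 2=strict accounting
sysctl vm.overcommit_memory

# Persist a change (requires root), then reload sysctl settings
echo 'vm.overcommit_memory = 2' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```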
The OOM Killer kicks in when Linux determines it doesn't have enough memory to
keep going, so in an act of self-preservation it kills a large memory-consuming
process rather than crash.
In your case we can see from the OOM Killer log that you had
2627224kB resident, which I think is cl
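For scale, that resident figure converts to roughly 2.5 GiB, i.e. well past a 2GB heap once non-heap allocations are counted; a quick sketch:

```shell
# 2627224 kB resident, converted to MiB (integer) and GiB (two decimals)
echo $((2627224 / 1024))
awk 'BEGIN { printf "%.2f\n", 2627224/1024/1024 }'
```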