Hello,
I'm trying to narrow down the cause of the behavior we saw on our Artemis
brokers recently during a test.
We deliberately put one broker into disk quota overload by creating big
temp files on the filesystem of its store (the broker is configured with
a 75% limit), and we got the expected log statement (the
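For context, a minimal sketch (not necessarily the actual setup) of how that kind of limit is expressed with the Java Configuration API for an embedded broker; the 75% value matches the description above, the acceptor and class names are just illustrative, and a standalone broker would carry the equivalent <max-disk-usage> element in broker.xml instead:

    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
    import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

    public class DiskQuotaSketch {
        public static void main(String[] args) throws Exception {
            // Once the store partition goes above 75% usage, the broker logs a warning
            // and blocks message production until usage drops back below the threshold.
            Configuration config = new ConfigurationImpl()
                    .setMaxDiskUsage(75)              // same meaning as <max-disk-usage>75</max-disk-usage>
                    .setSecurityEnabled(false)
                    .addAcceptorConfiguration("in-vm", "vm://0");

            EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
            broker.setConfiguration(config);
            broker.start();
            // ... fill the store filesystem with temp files to trigger the overload, then:
            broker.stop();
        }
    }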
Thank you for your response.
Since it seems from the previous messages on this list that Slack is the
most usual way, maybe send an invite?
I will try to gather the elements; some of them I may not be able to post
on an open site/mailing list.
Thanks.
PS: I'm in the CEST timezone.
Thank you for your response.
- No selectors on other queues.
- I checked the thread list on the Artemis console. At the time, 2 or 3
threads were in BLOCKED state, for example with this stack:
1. org.apache.activemq.artemis.core.server.impl.QueueImpl.addConsumer(QueueImpl.java:1414)
2. org.
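In case it is useful for comparing, a small self-contained sketch (plain JDK, nothing Artemis-specific; the class name and output format are made up) of how the same BLOCKED threads can be listed programmatically rather than through the console:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class BlockedThreadDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // dumpAllThreads(lockedMonitors, lockedSynchronizers) returns full stacks plus lock owners
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                if (info.getThreadState() == Thread.State.BLOCKED) {
                    System.out.println(info.getThreadName()
                            + " blocked on " + info.getLockName()
                            + " held by " + info.getLockOwnerName());
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }
    }

Running something like this (or jstack) against the broker JVM at the moment the issue occurs should show which thread owns the monitor the QueueImpl.addConsumer frames are waiting on.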
Hello,
Thank you for your response. The clients refused the downtime for the
store dump at the time, and we ended up doing a live cleanup of the queue
that they couldn't consume fast enough (it was holding ~20M response
messages).
The occurrence of that specific WARNING diminished significantly after
that.
Hello,
This is related to my ticket
https://issues.apache.org/jira/browse/ARTEMIS-3992
We still see occasional bursts of messages like:
2022-09-20 10:32:43,913 WARN [org.apache.activemq.artemis.journal]
AMQ142007: Can not find record 268 566 334 during compact replay
2022-09-20 10:32:43,913 WAR
By default the OOM killer will always target the biggest process first.
You'll need to monitor your global memory consumption and check that it
is not being eaten by a swarm of small, hungry processes.
If Artemis is the only significant process on your system, check
first the -Xmx value pass
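As a quick sanity check of the -Xmx part, the effective heap ceiling can also be read from inside the JVM; a minimal sketch (keep in mind the OOM killer reacts to total resident memory, so off-heap buffers, metaspace and thread stacks count as well):

    public class HeapCeiling {
        public static void main(String[] args) {
            // Roughly corresponds to -Xmx (or the JVM's own default when no -Xmx was passed)
            long maxHeap = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %d MiB%n", maxHeap / (1024 * 1024));
        }
    }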
Hello,
No, the failed connections are from the external clients (I have neither
the client environments nor their code). On the embedded broker, the
server side uses in-VM connectors, which do not seem to have such issues
(and do not use netty-ssl).
We made a deployment with a standalone Artemis (2.16)
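For reference, this is roughly what the server-side in-VM path looks like with the core client API; a minimal sketch assuming server id 0 and a made-up queue name, and note that it never goes through Netty or SSL:

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientMessage;
    import org.apache.activemq.artemis.api.core.client.ClientProducer;
    import org.apache.activemq.artemis.api.core.client.ClientSession;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    public class InVmProducerSketch {
        public static void main(String[] args) throws Exception {
            // "vm://0" connects to server id 0 inside the same JVM: no sockets, no TLS handshake
            try (ServerLocator locator = ActiveMQClient.createServerLocator("vm://0");
                 ClientSessionFactory factory = locator.createSessionFactory();
                 ClientSession session = factory.createSession();
                 ClientProducer producer = session.createProducer("example.queue")) {
                ClientMessage message = session.createMessage(true); // durable message
                message.getBodyBuffer().writeString("ping");
                producer.send(message);
            }
        }
    }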
Thank you for your response.
There are still a few things that are not clear to me.
Obviously the delivery failed after the server lost the connection to
the consumer, so I don't understand how, if the redelivery count was
updated by an opaque client-side delivery, the server got an up-to-date
co
Hello,
I am seeing unexpected behavior on redelivery with Artemis.
The documentation states that redelivery is attempted 10 times by
default and that -1 means infinite
(https://activemq.apache.org/components/artemis/documentation/2.10.1/undelivered-messages.html).
(I checked the documentation matc
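For what it's worth, a minimal sketch of how those two numbers map onto the address settings when the broker is configured in Java; the "#" match, the DLQ name and the 5s delay are illustrative, the method names are as I recall them from the 2.x embedded API, and in broker.xml the same values go into <max-delivery-attempts> and <redelivery-delay> inside an <address-setting>:

    import org.apache.activemq.artemis.api.core.SimpleString;
    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

    public class RedeliverySettingsSketch {
        public static void main(String[] args) {
            AddressSettings settings = new AddressSettings()
                    .setMaxDeliveryAttempts(10)   // the documented default; -1 would mean "retry forever"
                    .setRedeliveryDelay(5_000L)   // wait 5s between attempts
                    .setDeadLetterAddress(SimpleString.toSimpleString("DLQ")); // destination after the last failed attempt

            Configuration config = new ConfigurationImpl()
                    .setSecurityEnabled(false)
                    .addAddressesSetting("#", settings); // apply to every address
            // ... hand config to an EmbeddedActiveMQ instance, or express the same thing in broker.xml
        }
    }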
Hello,
I have a question about the effects of the scheduled pool max size in
Artemis (2.10.1).
I had a recurring issue with an embedded Artemis instance which I had
significant trouble tracking down.
The application stack uses a small number (3) of internal consumers to
process messages, conne
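For reference, a minimal sketch of where that knob sits when the broker is embedded and configured in Java; the values and the acceptor are illustrative, and as far as I understand the scheduled pool serves timed broker tasks (delayed redelivery, expiry scans and the like) separately from the general-purpose thread pool:

    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
    import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

    public class ScheduledPoolSketch {
        public static void main(String[] args) throws Exception {
            Configuration config = new ConfigurationImpl()
                    .setSecurityEnabled(false)
                    .setScheduledThreadPoolMaxSize(5)    // <scheduled-thread-pool-max-size> in broker.xml
                    .setThreadPoolMaxSize(30)            // <thread-pool-max-size>, the general-purpose pool
                    .addAcceptorConfiguration("in-vm", "vm://0");

            EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
            broker.setConfiguration(config);
            broker.start();
            // ... connect the (3) internal consumers over vm://0, then:
            broker.stop();
        }
    }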