We experienced the issue on both 2.24 and 2.25. Since downgrading to 2.20 we
haven't experienced the problem anymore.
When we were producing to the address on the nodes there were no consumers on
the underlying anycast queue. There were other producers/consumers present in
the cluster but th
Thank you for your response.
Since it seems from the previous messages on this list that Slack is the most
usual way, maybe send an invite?
I will try to gather the requested elements; some of them I may not be able to
post on an open site/mailing list.
Thanks.
PS: I'm in the CEST timezone.
I'd love to get a look at a few full thread dumps during the slow-down if
possible. Could you upload those somewhere or toss them on pastebin [1] or
gist [2], etc.?
Given that there are no consumers with selectors on the queues that have no
problems, and that the problem comes and goes along with the
After a while of using ActiveMQ with mKahaDB it crashes with the
following errors:
2022-10-05 14:14:06,735 | WARN | Error subscribing to /DvxSrv/12/Mine/Sampling | org.apache.activemq.transport.mqtt.strategy.AbstractMQTTSubscriptionStrategy | ActiveMQ Transport: tcp:///10.26.163.157:40380@188
Ah, thanks for the clarification! I will try using artemis-service start
like you suggested. I will also talk with the sysadmin about the second
method you suggested.
To clarify, I have an existing Java application that was starting Apache
ActiveMQ as an embedded process. I was asked to replace Ac
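For context, here is a hedged sketch of what that kind of embedded (classic)
ActiveMQ startup typically looks like; the connector URL and settings are
illustrative, not the poster's actual code:

    import org.apache.activemq.broker.BrokerService;

    public class EmbeddedBrokerExample {
        public static void main(String[] args) throws Exception {
            // Illustrative settings only; real applications usually configure
            // persistence, connectors, and security to match their needs.
            BrokerService broker = new BrokerService();
            broker.setPersistent(true);
            broker.addConnector("tcp://localhost:61616");
            broker.start();

            // ... the application does its work while sharing the JVM ...

            broker.stop();
        }
    }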
Thank you for your response.
- No selectors on other queues
- I checked the thread list on the Artemis console. At the time 2 or 3
threads were in BLOCKED state, for example with this stack:
1. org.apache.activemq.artemis.core.server.impl.QueueImpl.addConsumer(QueueImpl.java:1414)
2. org.
> Is there any way to make sure a Java process runs indefinitely on a Unix (AIX) environment that I should know about?
You can use the artemis-service script which will automatically start the
broker in the background, and it should run until it is shut down via the
same script. This script is menti
Thanks for the tip! I confess it took me much longer than I hoped to get a
word with our sysadmin because they were crazy busy. They did give me some
promising information though!
The sysadmin affirmed that any process that runs in a terminal, like
Artemis for example, will be shut down when the t
The scripts you linked are for the broker's "home" directory. This is
different from the scripts used to start an actual instance of the broker
(which I previously linked). The home/instance model is described in the
documentation [1]. I just removed UseParallelGC from those scripts [2] to
(hopeful
> Is there a known issue with performance of selectors on 2.24.0? In particular, one that may degrade over time.
I'm not aware of any particular issue, per se, but generally speaking,
using selectors on queue consumers is not good for performance. This is due
to the queue scanning required to match me
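For reference, a minimal JMS sketch of the kind of selector consumer being
discussed; the broker URL, queue name, and the selector itself are just
examples, not anything from the original thread:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.Queue;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class SelectorConsumerExample {
        public static void main(String[] args) {
            // Illustrative broker URL and queue name.
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            try (JMSContext context = cf.createContext()) {
                Queue queue = context.createQueue("exampleQueue");
                // Only messages whose 'region' property equals 'EU' are delivered.
                // The broker has to scan queued messages against the selector to
                // find matches, which is the cost mentioned above.
                JMSConsumer consumer = context.createConsumer(queue, "region = 'EU'");
                Message message = consumer.receive(1000);
                System.out.println("Received: " + message);
            }
        }
    }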
Hello,
Thank you for your response. The clients refused downtime for the store
dump at the time, so we ended up doing a live cleanup of the queue that they
couldn't consume fast enough (it was holding ~20M response messages).
The occurrence of that specific WARNING diminished significantly after
that.
What version are you using? Beware of ARTEMIS-3862, "Short lived
subscription makes address size inconsistent".
Are you sure 2.20 would fix it? I am not aware of any difference that
would make it so.
Do you have the replica taking over in your tests?
On Wed, Oct 5, 2022 at 4:31 AM Jelmer Marinus
As suggested by Mark Johnson, I'm evaluating a multi-KahaDB setup:
But after that, I'm facing the following warning:
2022-10-04 18:05:23,555 | WAR
It should work as the 2.0+ specs were specifically updated to say that you
can close the consumer inside its own MessageListener and onMessage will
'complete normally', which in this case would involve acking the message
for auto-ack. That was later called out specifically:
"If the session mode is
The message counter we are inspecting is retrieved by requesting the
"messageCount" of the queue using the "activemq.management" management queue.
When this was off, we checked the Hawtio web console and looked at the "Durable
message count" of the queue. Both counters showed the same number.
An a
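For anyone wanting to reproduce that check, here is a hedged sketch of querying
messageCount through the activemq.management queue with the Artemis JMS
management helper; the broker URL and queue name are examples, and the exact
resource-name prefix may differ between versions:

    import javax.jms.Message;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueRequestor;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
    import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class MessageCountExample {
        public static void main(String[] args) throws Exception {
            // Illustrative broker URL and queue name.
            QueueConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            QueueConnection connection = cf.createQueueConnection();
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            connection.start();

            // Management requests are sent to the special management queue.
            Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management");
            QueueRequestor requestor = new QueueRequestor(session, managementQueue);

            Message request = session.createMessage();
            // "queue.<name>" is the usual resource name; check your version's docs.
            JMSManagementHelper.putAttribute(request, "queue.exampleQueue", "messageCount");
            Message reply = requestor.request(request);
            long messageCount = ((Number) JMSManagementHelper.getResult(reply)).longValue();
            System.out.println("messageCount = " + messageCount);

            connection.close();
        }
    }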