I understand, but that is not something I can implement so that would mean I
am stuck with the old broker solution for all environments running these
specific application setups.
Further testing shows that the "initialRedeliveryDelay" setting does not
really work, by the way; messages still get stuck in th
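For reference, this is roughly how the redelivery delay is typically wired up
on the client side with the ActiveMQ 5 OpenWire JMS client; the broker URL and
the delay/retry values below are just placeholders, not an actual
configuration:

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.RedeliveryPolicy;

    public class RedeliveryExample {
        public static void main(String[] args) throws Exception {
            // Placeholder broker address.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker1.example.com:61616");

            RedeliveryPolicy policy = new RedeliveryPolicy();
            policy.setInitialRedeliveryDelay(5000); // wait 5 s before the first redelivery
            policy.setRedeliveryDelay(5000);        // delay between later attempts
            policy.setMaximumRedeliveries(6);       // after this the message goes to the DLQ

            factory.setRedeliveryPolicy(policy);
            // Connections created from this factory inherit the policy.
        }
    }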
Hi,
Basically, they all currently use that adapter because they are running
against the ActiveMQ 5 broker. I am in the process of moving towards
the Artemis broker in all environments because of its superior performance
and some much needed additional features. As I have understood it, it
I don't know if anyone is looking into this or has any ideas, but I have
made some new discoveries that might help in figuring out what is going on.
I still have not been able to replicate the issue in a smaller/more
controlled environment, even though pretty much everything is the same in
regards to br
You might have some luck trying the "keepAlive" option detailed here:
http://activemq.apache.org/tcp-transport-reference if the connection is
getting terminated by the haproxy for some reason.
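Something like this on the client's connection URI would be a quick way to try
it out (host and port below are placeholders for the real endpoint behind the
proxy):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class KeepAliveExample {
        public static void main(String[] args) throws Exception {
            // keepAlive=true enables keep-alive traffic on the TCP transport, which
            // may stop an intermediate proxy from dropping an otherwise idle connection.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "tcp://haproxy.example.com:61616?keepAlive=true");
            Connection connection = factory.createConnection();
            connection.start();
        }
    }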
Br,
Anton
--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
I think you missed the attached graph, but one thing to try out might be to
disable these new monitoring features, just to rule them out as the reason
for the increased CPU load over time. Just disable them and take some manual
measurements over time to see whether the effect is still taking place.
I realize this might be difficult to answer, as I myself am unable to reproduce
the issue in a simplified environment... but this really has me stumped and
it is currently the only thing keeping me from being able to migrate over to
the Artemis broker from ActiveMQ.
Is there anything else you can thin
I just found this old thread and a Jira ticket describing what seems to be the
same or a very similar issue, but I have been unable to reproduce it with the
method described there:
http://activemq.2283324.n4.nabble.com/Artemis-all-messages-go-to-quot-Delivering-quot-after-a-client-crash-td4702940.
Hmm, I might have the semantics mixed up here, but... how do clients behave
in a dynamically scaling performance cluster then? Clients that join the
cluster during high load can't possibly stop working after the broker they
originally connected to scales down and stops, right?
Doesn't a "failove
Why is that? Is there some design choice behind it, or could this be
implemented as a feature?
If not, is there some solution to failing over to other active nodes with
core clients, or should this just be done with, for instance, an openwire
client using the failover protocol?
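For context, the OpenWire alternative I have in mind would look roughly like
this (the broker host names are placeholders for the three cluster nodes):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverExample {
        public static void main(String[] args) throws Exception {
            // The failover: transport keeps one logical connection and reconnects
            // to any of the listed brokers if the current one goes away.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
            Connection connection = factory.createConnection();
            connection.start();
        }
    }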
Br,
Anton
--
Se
Hi,
I have an issue with the Artemis broker which I am having trouble solving
and also reproducing outside of my testing environment.
The setup is the following: 3 Artemis brokers running on separate servers,
clustered in an Active-Active fashion with static connectors
The clients are running JBo
Hi,
Yes, once all messages in db-1 have been removed, that file will be deleted,
and all new messages will be appended to db-3. The message log is append-only,
so new db files get created and old ones removed as part of normal operation.
Beware of mixing slow and fast consumers though, as just one single messa
Hi,
That's great news! Unfortunately, though, I am not very good at coding and I
am not familiar with your development process. I can certainly look into a
quick fix to try the changes out locally, but I cannot provide you with a
PR right now.
I will look into learning your processes for futur
Hi,
I have set up a symmetrical Artemis cluster, running with lots of different
clients and addresses. I want the clients to be able to connect to any of
the brokers and receive messages sent to any other broker in the cluster. As
such, it is configured with message-load-balancing "ON_DEMAND" a
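To illustrate the behaviour I am after, the clients effectively do something
like the following (host and queue names are made up, and the Artemis core JMS
client is used here only as an example):

    import javax.jms.*;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class OnDemandExample {
        public static void main(String[] args) throws Exception {
            // Placeholder host names for two of the clustered brokers.
            ConnectionFactory cfA = new ActiveMQConnectionFactory("tcp://brokerA:61616");
            ConnectionFactory cfB = new ActiveMQConnectionFactory("tcp://brokerB:61616");

            // Consumer attaches to broker A...
            Connection consumerConnection = cfA.createConnection();
            consumerConnection.start();
            Session consumerSession =
                    consumerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    consumerSession.createConsumer(consumerSession.createQueue("example.queue"));

            // ...while the producer sends to the same queue on broker B. With
            // message-load-balancing ON_DEMAND the cluster should forward the
            // message to broker A, since that is where a matching consumer exists.
            Connection producerConnection = cfB.createConnection();
            Session producerSession =
                    producerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producerSession.createProducer(producerSession.createQueue("example.queue"))
                    .send(producerSession.createTextMessage("hello"));

            System.out.println("received: " + consumer.receive(5000));

            producerConnection.close();
            consumerConnection.close();
        }
    }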
Hi,
I have encountered what I believe is a fringe issue with forwards within a
network of brokers.
The setup I am running features multiple components posting messages to and
reading messages from each other, where the larger flows are connected to all
brokers at once for increased throughput, whereas the smal