After some more digging, it seems the failover transport might be the
culprit. When posting a message directly to one node
tcp://server_1:61616
the filter is applied correctly. But when posting the message via the
failover transport
failover:(tcp://server_1:61616,tcp://server_2:61616)
the filter is not applied.
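For reference, a minimal sketch of the two cases, assuming the filter in
question is a consumer-side JMS message selector (only the broker URLs come
from the setup above; the queue name and selector string are placeholders):

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FilterCheck {
    public static void main(String[] args) throws Exception {
        // Direct connection to a single node: here the filter is honoured.
        // String brokerUrl = "tcp://server_1:61616";

        // Failover transport across both nodes: the case where the filter
        // does not seem to be applied.
        String brokerUrl = "failover:(tcp://server_1:61616,tcp://server_2:61616)";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("EXAMPLE.QUEUE"); // placeholder destination

            // JMS selector standing in for the "filter" mentioned above.
            MessageConsumer consumer = session.createConsumer(queue, "region = 'EU'");
            System.out.println("Received: " + consumer.receive(5000));
        } finally {
            connection.close();
        }
    }
}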
Thanks for the replies.
Below are my comments on the topics discussed:
> Have you considered using an actual standalone caching product such as
> Redis or MemCache as your cache rather than trying to create your own
> synchronized distributed in-memory cache?
That was the first thought, but since
Thanks for submitting the bug. Which versions before 5.15.3 did you test,
so we can update the Affects Version to reflect your findings?
Also, did you downgrade the broker as well, or just the client (leaving the
broker at 5.15.3)? What you wrote sounds like you only changed the client,
but not the broker.
A quick update for anyone having the same issue.
The cause was a bug in the ActiveMQ client that has existed since version
5.14; see https://issues.apache.org/jira/browse/AMQ-6949
I managed to get the HTTP connection working by downgrading to the 5.13.4
client.
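For anyone reproducing this, a rough sketch of the kind of client code
involved; the broker host/port and queue name are placeholders, and the only
point is that the same code works once the ActiveMQ client jars are pinned
to 5.13.4:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class HttpTransportExample {
    public static void main(String[] args) throws Exception {
        // HTTP transport URL; the activemq-http module must be on the client classpath.
        // Per the report above, this fails with 5.14+ clients (AMQ-6949) and works with 5.13.4.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("http://broker-host:8080"); // placeholder host/port

        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("EXAMPLE.QUEUE")); // placeholder queue
            producer.send(session.createTextMessage("hello over HTTP"));
        } finally {
            connection.close();
        }
    }
}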
I'm running ActiveMQ 5.15.0.
My ActiveMQ.DLQ is filling up with messages whose dlqDeliveryFailureCause
looks like the following:
java.lang.Throwable: TopicSubDiscard.
ID:ID:bac63a56-37427-1523628495562-1:44:1:1
java.lang.Throwable: Suppressing duplicate delivery on connection, consumer
ID:82e8ed600671-3
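In case it is useful for comparison, a small sketch for browsing
ActiveMQ.DLQ and printing the dlqDeliveryFailureCause of each message
without consuming it; the broker URL is a placeholder:

import java.util.Enumeration;

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class DlqBrowser {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL

        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue dlq = session.createQueue("ActiveMQ.DLQ");

            // Browse without consuming and print the failure cause the broker recorded.
            QueueBrowser browser = session.createBrowser(dlq);
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message message = (Message) messages.nextElement();
                System.out.println(message.getJMSMessageID() + " -> "
                        + message.getStringProperty("dlqDeliveryFailureCause"));
            }
        } finally {
            connection.close();
        }
    }
}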
Hi,
We use Artemis 1.1 with clients on WildFly 10.1.0.Final instances. We would
like to provide more robust and highly available environments. I tested one
Artemis broker with two WildFly servers whose consumers listen on the same
queue. When one of the WildFly instances is under load (throws
RuntimeExceptions in onMessage
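For context, a rough sketch of the kind of message-driven bean consumer
described above (two WildFly instances each deploying one of these, both
listening on the same queue); the class name and the JNDI destination lookup
are placeholders:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "java:/jms/queue/ExampleQueue"), // placeholder
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class ExampleConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            process(((TextMessage) message).getText());
        } catch (JMSException e) {
            // A RuntimeException thrown from onMessage rolls back the delivery,
            // so the broker will redeliver the message (possibly to the other instance).
            throw new RuntimeException(e);
        }
    }

    private void process(String body) {
        // placeholder for the real work that fails under load
    }
}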
Dear Tim,
Thank you for your reply.
My message indeed seems to have been corrupted somehow.
In the meantime I managed to narrow the problem down quite a bit and I can
provide more information.
For my setup I am running the ActiveMQ Message Broker 5.15.3 locally on my
Windows system. I took the