Joachim,
There must have been at least one file that was kept for some reason other than
acks for messages in an earlier file. Presumably this would be the one with the
lowest number.
Can you provide the log output for that file?
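In the meantime, turning up the KahaDB checkpoint/cleanup logging will show why
each data file is being retained; per the ActiveMQ docs, something like this in
conf/log4j.properties should do it (it is chatty, so turn it back off once
you've diagnosed the problem):

```properties
# Verbose KahaDB GC logging: logs the reason each data file is kept
# (unacked messages, pending acks, durable subs, etc.) at each checkpoint.
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE
```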
The standard guess when people say that their KahaDB files are being kept
even though there
Looking through your stack trace, I can see you're using 5.13.1, not 5.3.1,
which is a good thing. (If you were using 5.3.1, I'd tell you no one was
likely to help you with a version that old.)
It looks like that exception is probably being thrown in
org.apache.activemq.store.kahadb.disk.journal.D
As I understand it, the cursor spools messages that have been committed into
the memory store out to disk. If they're participating in a transaction,
they're not committed and are instead just sitting in memory in the
org.apache.activemq.store.memory.MemoryTransactionStore. (I assume we're
talking
I didn't follow the bit about grouping your workers. You have N ActiveMQ
consumers, and the message group will be assigned to one (and only one) of
them. Where does grouping workers play into it?
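To illustrate what I mean, the dispatch semantics can be sketched roughly like
this (a toy illustration of the JMSXGroupID behavior, not ActiveMQ's actual
implementation; the class and consumer names are made up):

```python
# Toy sketch of message-group dispatch: the first message of a group
# pins that group to one consumer, and every later message carrying the
# same JMSXGroupID is routed to that same consumer.

class GroupDispatcher:
    def __init__(self, consumers):
        self.consumers = list(consumers)
        self.group_owner = {}   # JMSXGroupID -> owning consumer
        self.next_consumer = 0  # round-robin slot for new groups

    def dispatch(self, group_id):
        if group_id not in self.group_owner:
            # New group: assign it to the next consumer in rotation.
            self.group_owner[group_id] = self.consumers[self.next_consumer]
            self.next_consumer = (self.next_consumer + 1) % len(self.consumers)
        return self.group_owner[group_id]

dispatcher = GroupDispatcher(["consumerA", "consumerB", "consumerC"])
for group in ["orders-1", "orders-2", "orders-1", "orders-1"]:
    print(group, "->", dispatcher.dispatch(group))
# orders-1 always goes to consumerA; orders-2 to consumerB.
```

The point being: the consumers don't group themselves; the broker picks an
owner per group, and the producer only has to set the group ID.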
Tim
On Mar 9, 2018 3:18 PM, "vramanx" wrote:
Thanks for the clarification Tim! From the documentation, I thought that the
consumers have to be tagged with a static GroupID which is then later set on
the producer side.
So on the consumer side we just need to group workers as a processing unit
and the system will pick an available group to assig
I have been able to successfully spool a large number of persistent messages
over to the filesystem exactly as intended by using the
cursorMemoryHighWaterMark for messages that have been enqueued to the queue.
The issue I am running into, which happens only a few times per day in our
system, is th
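For reference, the per-destination policy I'm using looks roughly like this (a
sketch of the relevant activemq.xml fragment; the queue wildcard and the exact
percentage are just illustrative):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Spool messages out of memory once the cursor reaches
           70% of its memory limit (value is a percentage). -->
      <policyEntry queue=">" cursorMemoryHighWaterMark="70"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```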
Yes, the same is true in ActiveMQ 5.x.
On Mar 9, 2018 7:03 AM, "Justin Bertram" wrote:
> This is possible in ActiveMQ Artemis, and I would expect it is possible in
> the 5.x broker as well. A message broker that didn't support
> interoperability between supported protocols wouldn't be worth muc
Hi.
When I start ActiveMQ (5.3.1), the following logs are output and it cannot
start.
What could be the cause?
2018-03-09 11:43:32,468 | INFO | Refreshing
org.apache.activemq.xbean.XBeanBrokerFactory$1@2eeda025: startup date [Fri
Mar 09 11:43:32 JST 2018]; root of context hier
This is possible in ActiveMQ Artemis, and I would expect it is possible in
the 5.x broker as well. A message broker that didn't support
interoperability between supported protocols wouldn't be worth much in my
opinion.
Justin
[1] http://activemq.apache.org/artemis/
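Concretely, in the 5.x broker each protocol is just a transport connector in
conf/activemq.xml, so OpenWire (JMS) and MQTT clients can share the same broker
and destinations; a minimal sketch using the default ports:

```xml
<transportConnectors>
  <!-- JMS clients connect over OpenWire -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  <!-- MQTT clients; an MQTT topic "foo/bar" maps to the JMS topic "foo.bar" -->
  <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883"/>
</transportConnectors>
```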
On Fri, Mar 9, 2018 at 7:52
Hi All,
I am new to messaging protocols and IoT.
I am looking for protocol-bridging APIs to bridge MQTT topics and JMS
queues/topics. I want to send a message from a JMS producer into a queue, and
this message should be received by a subscriber on a relevant MQTT topic and
return an acknowledgement mess
Anton,
Unfortunately that sounds like expected behavior in that situation.
It sounds like you'd want a new config flag called something like
dispatchOnlyToHighestPriorityConsumers, which would route only to local
consumers (if at least one exists) or only to the set of consumers
connected via the
Hello,
I have some issues with scale-down of colocated servers. I have a symmetric,
statically defined cluster of two colocated nodes configured with scale-down.
The situation occurs thus:
1. Start both brokers. They form a connection and replicate.
2. Close server1
-> Server shuts down, server0
Hi,
I have encountered what I believe is a fringe issue with message forwarding
within a network of brokers.
The setup I am running features multiple components posting and reading
messages to each other, where the larger flows are connected to all brokers
at once for increased throughput, whereas the smal
Hello.
Recently we had a problem on ActiveMQ 5.10 (with a manually applied patch for
AMQ-5542).
The mKahaDB data store grew to ~30 GB and couldn't clean up the data files
anymore.
The log always showed something like this:
/not removing data file: 317633 as contained ack(s) refer to r