I didn't follow your second paragraph. The goal with the Chronicle code
should be to put the message back in memory after the Chronicle read as it
was before the Chronicle write, right? So if the message body (only) was
compressed (using the compression algorithm used for ActiveMQ messages,
which…
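For what it's worth, ActiveMQ's message compression is zlib-based (Java's Deflater), so the "restore the pre-write state" idea can be sketched in Python with `zlib` standing in for the Java side. All names here are illustrative, not ActiveMQ API:

```python
import zlib

# Hypothetical compressed message body as it sits in broker memory
# before the Chronicle write (zlib stands in for Java's Deflater).
plain = b"hello from producer" * 10
compressed_body = zlib.compress(plain)

# Accessing the body through the getter decompresses it...
read_body = zlib.decompress(compressed_body)

# ...so after serializing read_body out (e.g. to Chronicle),
# re-compress before putting the body back in the message,
# restoring the state it was in before the write.
restored_body = zlib.compress(read_body)
assert restored_body == compressed_body
```

Since `zlib.compress` is deterministic for the same input and level, the round trip restores the byte-for-byte original.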
Hi,
it seems that Debian is not using the init script provided by the ActiveMQ
project.
I downloaded and extracted the current Debian init script of the activemq
package from https://packages.debian.org/jessie/all/activemq/download
(see attached file).
In 2009 the init script of the activemq distribution…
>
>
> I don't think it's the network stack where that code works; I'm pretty sure
> the message itself does decompression when the body is accessed via the
> getter. But when you read the message body to serialize it to Chronicle,
> you're likely to invoke that decompression code and end up undoing…
Your understanding of this is good and you've got the right concepts here.
The only two reasons I can think of that you shouldn't use a mesh (which I
think is the same as a complete graph, though you drew a distinction
between the two in your email, so maybe there's a difference I'm not
understanding)…
You probably already figured this out, but that setting should be on any
machines at either end of a connection across a high-latency network link.
So definitely your brokers, but also any hosts of consumers that connect to
a broker across a high-latency link.
This setting is especially important…
I would have thought that keepalives would be sent in both directions
on any connection, irrespective of whether it's used for sending or
receiving messages, and that the lack of receipt of them on that connection
would have caused it to be declared dead. If the former is true, then
there's a bug in the de…
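For anyone wanting to rule the inactivity monitor in or out: the keepalive interval on an OpenWire transport is governed by the `wireFormat.maxInactivityDuration` URI option (milliseconds; the broker-side and client-side values are negotiated). For example, something like:

```
tcp://broker-host:61616?wireFormat.maxInactivityDuration=30000
```

Setting it to `0` disables the monitor entirely, which can be a useful diagnostic when you suspect connections are being declared dead incorrectly.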
Your use of an exclusive reply queue with more than one producer is broken.
The point of an exclusive reply queue is that your publisher knows it's the
only one publishing messages whose replies will go on that reply
queue, because all other publishers will use different reply queues for
their…
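The failure mode is easy to see in a toy sketch. This is plain Python, not JMS; the queue and field names are hypothetical, but the ordering problem is the same one a shared reply queue has on a real broker:

```python
import queue

# Toy stand-in for ONE reply queue shared (wrongly) by two producers.
shared_reply_q = queue.Queue()

def respond(request):
    # The replier copies the request's correlation id onto the reply.
    shared_reply_q.put({"correlation_id": request["correlation_id"],
                        "body": request["body"].upper()})

# Producers A and B both send requests naming the SAME reply queue,
# and B's reply happens to land on the queue first...
respond({"correlation_id": "B-1", "body": "from b"})
respond({"correlation_id": "A-1", "body": "from a"})

# ...so when producer A blindly takes the next reply, it gets B's.
reply_seen_by_a = shared_reply_q.get()
assert reply_seen_by_a["correlation_id"] == "B-1"  # not A's reply!
```

With an exclusive reply queue per producer (or a selector on the correlation id), a producer can never consume a reply meant for someone else, which is the whole point of the pattern.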
Oh, I forgot: was there any indication of Producer Flow Control kicking in
on the broker prior to when the consumer stopped processing messages?
On Sun, Apr 26, 2015 at 9:06 AM, Tim Bain wrote:
> Did you ever solve this issue?
>
> If not, can you please clarify a couple things in your description?
Did you ever solve this issue?
If not, can you please clarify a couple things in your description?
1. Did you confirm (via a thread dump) that the MDB wasn't actually hung
somewhere in your consumer code? Clearly the broker believed it had
dispatched a prefetch buffer's worth of messages…
Mo,
Sorry for the long delay in getting back to you; maybe you've already
figured out your problem, but if not hopefully this will help.
My understanding of how RDBMSes use indices is based on Oracle, so take
this with a grain of salt since it might not apply to PostgreSQL.
As I understand it, m…
You may have figured this out in the past 3 weeks since you sent your first
note, but the answer to your question is no: it's not currently
possible to enable PFC selectively, though there are a few tweaks you can
make individually.
PFC doesn't apply to consumers, so there is nothing you can…
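One of the per-destination tweaks alluded to above is the `producerFlowControl` attribute on a `policyEntry` in `activemq.xml`, which lets you turn PFC off for specific destinations while leaving it on elsewhere. A sketch (the queue name and memory limit are examples, not recommendations):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Disable PFC for one set of queues only; other
           destinations keep the broker-wide default. -->
      <policyEntry queue="BULK.>" producerFlowControl="false"
                   memoryLimit="128mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Note that with PFC disabled, producers can run the broker up against its system memory and store limits, so a sensible `memoryLimit` matters more, not less.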
Replying to my own post.
I found that 5.11 has a LeaseLockerIOExceptionHandler that seems to be
designed to fix this problem. Using the lease-database-locker on the
persistence adapter and the LeaseLockerIOExceptionHandler on the
brokerService fixed the problem.
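For anyone hitting the same thing, the combination described above looks roughly like this in `activemq.xml` (element names per the XBean schema; the datasource reference and sleep interval are placeholders for your own values):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <!-- Handler that retries instead of shutting the broker down
       on database IO exceptions. -->
  <ioExceptionHandler>
    <leaseLockerIOExceptionHandler/>
  </ioExceptionHandler>
  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#my-ds">
      <locker>
        <!-- Lease-based master lock for the JDBC store. -->
        <lease-database-locker lockAcquireSleepInterval="10000"/>
      </locker>
    </jdbcPersistenceAdapter>
  </persistenceAdapter>
</broker>
```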