I didn't follow your second paragraph. The goal with the Chronicle code
should be to put the message back in memory after the Chronicle read as it
was before the Chronicle write, right? So if the message body (only) was
compressed (using the compression algorithm used for ActiveMQ messages, which
>
>
> I don't think it's the network stack where that code works; I'm pretty sure
> the message itself does decompression when the body is accessed via the
> getter. But when you read the message body to serialize it to Chronicle,
> you're likely to invoke that decompression code and end up undoing the
> compression.
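To make the round trip concrete, here is a minimal, self-contained sketch using plain `java.util.zip` (zlib) — this is illustrative only, not ActiveMQ's internal code path, and the class/helper names are mine. The point is that reading the body inflates it, so you have to re-deflate before putting the message back if you want memory to look the way it did before the Chronicle write:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical helpers, NOT ActiveMQ code. They only illustrate that a body
// inflated on read must be re-deflated before the message is put back, or
// the in-memory state no longer matches the pre-write state.
public class ZlibRoundTrip {
    public static byte[] deflate(byte[] raw) {
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    public static byte[] inflate(byte[] compressed) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "some message body some message body".getBytes("UTF-8");
        byte[] stored = deflate(body);   // state before the Chronicle write
        byte[] read = inflate(stored);   // accessing the body inflates it
        byte[] restored = deflate(read); // must re-deflate to restore state
        System.out.println(java.util.Arrays.equals(stored, restored));
    }
}
```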
On Mon, Apr 20, 2015 at 6:24 AM, Tim Bain wrote:
> I'm confused about what would drive the need for this.
>
> Is it the ability to hold more messages than your JVM size allows? If so,
> we already have both KahaDB and LevelDB; what does Chronicle offer that
> those other two don't?
>
>
The ability to hold them off-heap, but still in memory.
> >>
> >> First, right now, messages are stored in the same heap, and if you're
> >> using the memory store, like, that's going to add up. This will increase
> >> GC latency, and you actually need 2x more memory because you have to have
> >> [...]
> >> using direct buffers. The downside to this is that the messages need to
> >> be serialized/deserialized with each access. But realistically that's
> >> probably acceptable because you can do something like 1M message
> >> deserializations per second, which is normally more than the throughput
> >> of the broker.
> >>
> >> Additionally, Chronicle supports zlib or snappy compression on the
> >> message bodies. So, while the broker supports message compression now,
> >> it doesn't support this feature on
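For anyone following along, the trade-off described above — keeping message bodies off the garbage-collected heap at the cost of a serialize/deserialize on each access — looks roughly like this minimal sketch. This is plain `java.nio`, not Chronicle's actual API; the class and method names are mine:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative only: stores one message body off-heap in a direct buffer,
// so the bytes live outside the garbage-collected Java heap. Every read
// pays a copy ("deserialize") cost, which is the trade-off discussed above.
public class OffHeapBody {
    private final ByteBuffer direct;

    public OffHeapBody(byte[] body) {
        // allocateDirect puts the backing memory outside the Java heap
        direct = ByteBuffer.allocateDirect(body.length);
        direct.put(body);
        direct.flip();
    }

    // "Deserialize": copy the bytes back onto the heap for access
    public byte[] read() {
        byte[] copy = new byte[direct.remaining()];
        direct.duplicate().get(copy);
        return copy;
    }

    public static void main(String[] args) {
        OffHeapBody stored =
            new OffHeapBody("hello broker".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(stored.read(), StandardCharsets.UTF_8));
        // prints "hello broker"
    }
}
```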
AMQ 5.3.0, Tomcat 6.0.20
I have an embedded client and server broker running within a single Tomcat
instance for testing. With message compression enabled we get the following
error. Any ideas what we can do to mitigate this, or at least debug it further?
2010-04-15 11:46:16,378 INFO
On 6/20/07, keneida <[EMAIL PROTECTED]> wrote:
what kind of compression is used when jms.useCompression is defined to true?
gzip
--
James
---
http://macstrac.blogspot.com/
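In code terms, a gzip round trip over a message body via the JDK looks like this — a generic `java.util.zip` sketch of the kind of compression being discussed, not ActiveMQ's actual wire-level implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Generic JDK gzip compress/decompress; illustrative only, not the
// broker's internal code path.
public class GzipExample {
    public static byte[] gzip(byte[] raw) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write(raw);
        }
        return bytes.toByteArray();
    }

    public static byte[] gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[1024];
            int n;
            while ((n = gz.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "a fairly repetitive body body body".getBytes("UTF-8");
        byte[] packed = gzip(body);
        System.out.println(java.util.Arrays.equals(gunzip(packed), body));
        // prints "true"
    }
}
```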
--
View this message in context:
http://www.nabble.com/message-compression-tf3952518s2354.html#a11213828
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
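For reference, the option in question is set via ActiveMQ's standard connection-URI option syntax on the client's broker URL (host and port below are placeholders):

```
tcp://localhost:61616?jms.useCompression=true
```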