memory mapped files.
Not sure how it applies to this case.
Regards,
Jan Kotek
On Friday 02 August 2013 22:19:34 Jay Kreps wrote:
> Chris commented in another thread about the poor compression performance in
> 0.8, even with snappy.
>
> Indeed if I run the linear log write throughput test on my laptop I see
> 75MB/sec with no compression and 17MB/sec with snappy.
on that offsets increment continuously. Two
> > >workarounds of this issue:
> > >
> > >1) In log compaction, instead of deleting the to-be-deleted message,
> > >just set its payload to null but keep its header, hence keeping its
> > >slot in the log.
> > >
> > >2) Instead of writing the logical offset of messages, write the deltas
> > >of their offsets compared with the offset of the wrapper message. So -1
> > >would mean continuously decrementing from the wrapper message offset,
> > >and -2/3/... would be skipping holes inside the compressed message.
>
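The delta scheme in (2) above can be sketched as follows. This is only an illustration of the proposed encoding, not Kafka code; the helper names `encode_offsets`/`decode_offsets` are hypothetical:

```python
def encode_offsets(wrapper_offset, message_offsets):
    """Store each inner message's offset as a delta relative to the
    wrapper message's offset. Holes left by log compaction simply show
    up as skipped delta values (e.g. -3, -2, 0 when -1 was compacted)."""
    return [off - wrapper_offset for off in message_offsets]

def decode_offsets(wrapper_offset, deltas):
    """Recover absolute logical offsets from the wrapper offset."""
    return [wrapper_offset + d for d in deltas]

# Wrapper message at offset 100 holds inner messages whose original
# offsets were 97, 98, 100 (offset 99 was removed by compaction).
deltas = encode_offsets(100, [97, 98, 100])   # [-3, -2, 0]
assert decode_offsets(100, deltas) == [97, 98, 100]
```

Because the deltas are relative, the wrapper can be reassigned a new offset on re-compression without rewriting the inner messages' headers.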
Chris commented in another thread about the poor compression performance in
0.8, even with snappy.
Indeed if I run the linear log write throughput test on my laptop I see
75MB/sec with no compression and 17MB/sec with snappy.
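A round-trip throughput measurement of this sort can be sketched as below. Snappy is not in the Python standard library, so zlib stands in here purely to show the shape of the measurement; the numbers it produces will differ from snappy's:

```python
import time
import zlib

def roundtrip_mb_per_sec(payload: bytes, iterations: int = 50) -> float:
    """Compress and decompress `payload` repeatedly, return MB/sec of
    input data pushed through the full round trip."""
    start = time.perf_counter()
    for _ in range(iterations):
        restored = zlib.decompress(zlib.compress(payload))
    elapsed = time.perf_counter() - start
    assert restored == payload  # sanity-check the round trip
    return (len(payload) * iterations) / elapsed / 1e6

# Repetitive data, roughly like log lines, compresses well.
data = b"some moderately repetitive log line\n" * 10_000
print(f"zlib round-trip: {roundtrip_mb_per_sec(data):.0f} MB/sec")
```

Measuring compress+decompress together, as the claimed snappy figure does, can hide an asymmetry: the compression side alone is usually the bottleneck on the write path.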
This is a little surprising as snappy claims 200MB round-trip