Hi Neha,

Thanks for the info. I will be most interested to see what your testing
shows.
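
For anyone else following the thread, here is a minimal, hypothetical sketch of the
decompress/renumber/recompress step you describe (all names here are invented for
illustration; the real code is the Scala in ByteBufferMessageSet.assignOffsets). It
shows why the broker pays the codec cost on every produce request: the producer
cannot know the log offsets, so the broker must open the compressed batch to stamp
them in.

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

// Hypothetical illustration only -- not the real Kafka wire format.
public class AssignOffsetsSketch {

    // Producer side: GZIP a batch of payloads; offsets are placeholders (-1)
    // because only the broker knows where the log currently ends.
    static byte[] compress(List<String> payloads) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(buf))) {
            for (String p : payloads) {
                out.writeLong(-1L);      // producer does not know the offset
                out.writeUTF(p);
            }
        }
        return buf.toByteArray();
    }

    // Broker side: the CPU-heavy step under discussion -- fully decompress,
    // assign unique increasing offsets, then recompress the whole batch.
    static byte[] assignOffsets(byte[] batch, long logEndOffset) throws IOException {
        List<String> payloads = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(new ByteArrayInputStream(batch)))) {
            while (true) {
                try {
                    in.readLong();                  // discard placeholder offset
                    payloads.add(in.readUTF());
                } catch (EOFException end) {
                    break;                          // batch fully decompressed
                }
            }
        }
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(buf))) {
            long next = logEndOffset;
            for (String p : payloads) {
                out.writeLong(next++);              // unique, monotonic offsets
                out.writeUTF(p);
            }
        }
        return buf.toByteArray();
    }

    // Read back (offset, payload) pairs, as a consumer would.
    static List<Map.Entry<Long, String>> decode(byte[] batch) throws IOException {
        List<Map.Entry<Long, String>> out = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(new ByteArrayInputStream(batch)))) {
            while (true) {
                try {
                    out.add(Map.entry(in.readLong(), in.readUTF()));
                } catch (EOFException end) {
                    break;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        byte[] fromProducer = compress(List.of("a", "b", "c"));
        byte[] onDisk = assignOffsets(fromProducer, 42L);
        for (Map.Entry<Long, String> m : decode(onDisk)) {
            System.out.println(m.getKey() + " -> " + m.getValue());
        }
    }
}
```

A lighter codec (e.g. Snappy, as you mention) only shrinks the per-byte cost of the
two codec passes; the decompress/recompress round trip itself is inherent to
assigning per-message offsets inside a compressed set.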


Thanks,
Ross



On 19 March 2013 17:10, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> Yes, your understanding is correct. The reason we have to recompress the
> messages is to assign a unique offset to each message inside a compressed
> message set. Some preliminary load testing shows a 30% increase in CPU, but
> that is using GZIP, which is known to be CPU intensive. By this week, we will
> know the CPU usage for a lighter compression codec like Snappy. We will post
> the results on the mailing list.
>
> Thanks,
> Neha
>
> On Monday, March 18, 2013, Ross Black wrote:
>
> > Hi,
> >
> > I have just started looking at moving from 0.7 to 0.8 and wanted to
> > confirm my understanding of code in the message server/broker.
> >
> > In the code for 0.8, KafkaApis.appendToLocalLog calls log.append(...,
> > assignOffsets = true), which then calls
> > ByteBufferMessageSet.assignOffsets. This method seems to uncompress and
> > then re-compress the entire set of messages.
> >
> > Is my understanding of the code correct?
> > Has any testing been done on the CPU consumption / performance of the
> > message server to determine whether this adversely impacts message
> > throughput under high load?
> >
> > Thanks,
> > Ross
> >
>