That worked!  My publisher is sending a 1MB payload and compressing it with
snappy.  I would have thought that, with compression, it would have fit
into the 100000-byte default of the sample code.  I guess not!
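
For anyone who finds this thread later, here is roughly what the fix looks
like against the wiki example.  This is just a sketch (clientName, a_topic,
a_partition, readOffset, and consumer are the variables from the sample
code); the only real change is the last argument to addFetch():

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.message.ByteBufferMessageSet;

    // Fetch size raised from the sample's 100000 bytes to 1000000 so a
    // whole snappy-compressed message set fits in one fetch.  If the
    // fetch size is smaller than the compressed batch, the broker
    // returns a truncated response and the consumer sees no messages
    // and no error, which is exactly the "no further messages"
    // behavior described below.
    FetchRequest req = new FetchRequestBuilder()
            .clientId(clientName)
            .addFetch(a_topic, a_partition, readOffset, 1000000)
            .build();
    FetchResponse fetchResponse = consumer.fetch(req);

    ByteBufferMessageSet messages =
            fetchResponse.messageSet(a_topic, a_partition);
    if (!fetchResponse.hasError() && !messages.iterator().hasNext()) {
        // No error and no messages at an offset that should have data:
        // the fetch size is probably still too small for the
        // compressed message set.
        System.out.println("Empty fetch; consider a larger fetch size");
    }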

Thanks.


On Thu, Feb 27, 2014 at 1:37 AM, Jun Rao <jun...@gmail.com> wrote:

> Try making the last parameter in the following call larger (say to
> 1,000,000).
>
> .addFetch(a_topic, a_partition, readOffset, 100000)
>
> Thanks,
>
> Jun
>
>
> On Wed, Feb 26, 2014 at 9:32 PM, Dan Hoffman <hoffman...@gmail.com> wrote:
>
> > I'm not sure what you mean - could you be more specific in terms of
> > what I might need to adjust in the simple consumer example code?
> >
> >
> > On Thu, Feb 27, 2014 at 12:24 AM, Jun Rao <jun...@gmail.com> wrote:
> >
> > > Are you using a fetch size larger than the whole compressed unit?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > > On Wed, Feb 26, 2014 at 5:40 PM, Dan Hoffman <hoffman...@gmail.com>
> > wrote:
> > >
> > > > Publisher (using librdkafka C api) has sent both gzip and snappy
> > > > compressed messages.  I find that the java Simple Consumer (
> > > > https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example#
> > > > ) is unable to read the snappy ones, while the High Level one is.
> > > > Is this expected?  Is there something you have to do in order to
> > > > handle the snappy messages?  There are no error messages provided;
> > > > it simply acts as if there are no further messages.
> > >
> >
>
