Thanks, Jun, for the heads up!

Looked it up on the wiki page:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example#id-0.8.0SimpleConsumerExample-ReadingtheData

"Also note that we are explicitly checking that the offset being read is
not less than the offset that we requested. This is needed since if Kafka
is compressing the messages, the fetch request will return an entire
compressed block even if the requested offset isn't the beginning of the
compressed block. Thus a message we saw previously may be returned again."
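For reference, the check the wiki describes boils down to skipping any MessageAndOffset whose offset precedes the offset used in the fetch request. A minimal sketch of that logic (using a hypothetical stand-in class here, not the real kafka.message.MessageAndOffset):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetSkipSketch {

    // Hypothetical stand-in for Kafka's MessageAndOffset, just for illustration.
    static final class MessageAndOffset {
        final long offset;
        final String payload;
        MessageAndOffset(long offset, String payload) {
            this.offset = offset;
            this.payload = payload;
        }
    }

    // Drop messages whose offset is less than the requested fetch offset.
    // These can appear when the fetch lands in the middle of a compressed
    // block, since the broker returns the entire block.
    static List<MessageAndOffset> skipOldMessages(List<MessageAndOffset> fetched,
                                                  long requestedOffset) {
        List<MessageAndOffset> fresh = new ArrayList<>();
        for (MessageAndOffset m : fetched) {
            if (m.offset < requestedOffset) {
                continue; // already-seen message from the same compressed block
            }
            fresh.add(m);
        }
        return fresh;
    }

    public static void main(String[] args) {
        // Simulate a decompressed block whose first messages precede the request.
        List<MessageAndOffset> batch = new ArrayList<>();
        batch.add(new MessageAndOffset(10, "a"));
        batch.add(new MessageAndOffset(11, "b"));
        batch.add(new MessageAndOffset(12, "c"));

        List<MessageAndOffset> fresh = skipOldMessages(batch, 12);
        System.out.println(fresh.size());          // prints 1
        System.out.println(fresh.get(0).offset);   // prints 12
    }
}
```

With the high-level consumer this filtering is done for you; it only matters when driving fetch requests yourself via SimpleConsumer.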

Thanks once more!

Kind regards,
Stevo Slavic.

On Wed, Nov 11, 2015 at 6:12 PM, Jun Rao <j...@confluent.io> wrote:

> Are you using compressed messages? If so, when using SimpleConsumer, it's
> possible for you to see messages whose offset is smaller than the offset in
> the fetch request, if those messages are in the same compressed batch. It's
> the responsibility of the client to skip over those messages. Note that the
> high level consumer handles that logic already.
>
> Thanks,
>
> Jun
>
> On Wed, Nov 11, 2015 at 12:40 AM, Stevo Slavić <ssla...@gmail.com> wrote:
>
> > Hello Apache Kafka community,
> >
> >
> > I'm using the simple consumer with Kafka 0.8.2.2 and noticed that under
> > some conditions the fetch response message set for a partition can
> > contain at least one (if not all) MessageAndOffset with nextOffset equal
> > to the current (committed) offset, i.e. the offset used in the fetch
> > request. Not sure how it's related, but I noticed this behavior
> > especially often when using the new async producer, and when the fetch
> > request was able to fetch several messages all the way to the end of the
> > partition.
> >
> > Is this a feature or a bug?
> >
> >
> > Kind regards,
> >
> > Stevo Slavic.
> >
>
