Thanks so much, Neha. That did the trick.

On Mon, Nov 4, 2013 at 8:43 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> You need to set "auto.offset.reset"="smallest". By default, the consumer
> will start consuming from the latest offset.
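>
> For example, in the consumer's properties file (the zookeeper address and
> group id below are just placeholders; auto.offset.reset is the relevant
> line):
>
>   zookeeper.connect=localhost:2181/kafka
>   group.id=my-group
>   auto.offset.reset=smallest
>
> Keep in mind this setting only applies when the group has no committed
> offset yet, or when the committed offset is out of range.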
>
> Thanks,
> Neha
>
>
> On Mon, Nov 4, 2013 at 4:38 PM, Guozhang Wang <wangg...@gmail.com> wrote:
>
> > Any exceptions on the broker side, in the server log?
> >
> > Guozhang
> >
> >
> > On Mon, Nov 4, 2013 at 4:27 PM, Vadim Keylis <vkeylis2...@gmail.com> wrote:
> >
> > > Thanks for confirming, but that is not the behavior I observe. My
> > > consumer does not commit offsets to Kafka. It does receive the messages
> > > sent to Kafka. Once restarted, I should have gotten the messages the
> > > consumer previously received, but on the contrary I got none. The logs
> > > confirm the initial offset was -1. What am I doing wrong?
> > >
> > > 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial consumer offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
> > > 04 Nov 2013 16:03:11,570 DEBUG meetme_Consumer_pkey_1062739249349868 kafka.consumer.PartitionTopicInfo - initial fetch offset of meetme:0: fetched offset = -1: consumed offset = -1 is -1
> > >
> > > 04 Nov 2013 16:03:11,879 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.network.BlockingChannel - Created socket with SO_TIMEOUT = 30000 (requested 30000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 11460 (requested -1).
> > > 04 Nov 2013 16:03:11,895 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset fetch offset of ( meetme:0: fetched offset = 99000: consumed offset = -1 ) to 99000
> > > 04 Nov 2013 16:03:11,896 DEBUG event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.PartitionTopicInfo - reset consume offset of meetme:0: fetched offset = 99000: consumed offset = 99000 to 99000
> > > 04 Nov 2013 16:03:11,897 INFO event1_ddatahubvadim02.tag-dev.com-1383609790143-4ed618e7-leader-finder-thread kafka.consumer.ConsumerFetcherManager - [ConsumerFetcherManager-1383609790333] Adding fetcher for partition [meetme,0], initOffset -1 to broker 9 with fetcherId 0
> > >
> > >
> > > Here is my property file:
> > > zookeeper.connect=dzoo01.tag-dev.com:2181/kafka
> > > zookeeper.connectiontimeout.ms=1000000
> > > group.id=event1
> > > auto.commit.enable=false
> > >
> > >
> > > On Mon, Nov 4, 2013 at 3:32 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> > >
> > > > That is correct. If auto.commit.enable is set to false, the offsets
> > > > will not be committed at all unless the consumer calls the commit
> > > > function explicitly.
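> > > >
> > > > A rough sketch with the 0.8 high-level consumer API, assuming a
> > > > single-partition topic named "meetme" (the zookeeper address is a
> > > > placeholder, and handle() is just a stand-in for real processing):
> > > >
> > > > import java.util.Collections;
> > > > import java.util.List;
> > > > import java.util.Map;
> > > > import java.util.Properties;
> > > > import kafka.consumer.Consumer;
> > > > import kafka.consumer.ConsumerConfig;
> > > > import kafka.consumer.ConsumerIterator;
> > > > import kafka.consumer.KafkaStream;
> > > > import kafka.javaapi.consumer.ConsumerConnector;
> > > > import kafka.message.MessageAndMetadata;
> > > >
> > > > public class ManualCommitConsumer {
> > > >   public static void main(String[] args) {
> > > >     Properties props = new Properties();
> > > >     props.put("zookeeper.connect", "localhost:2181/kafka"); // placeholder
> > > >     props.put("group.id", "event1");
> > > >     props.put("auto.commit.enable", "false"); // commits are now our job
> > > >     ConsumerConnector connector =
> > > >         Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
> > > >
> > > >     // One stream for the "meetme" topic
> > > >     Map<String, List<KafkaStream<byte[], byte[]>>> streams =
> > > >         connector.createMessageStreams(Collections.singletonMap("meetme", 1));
> > > >     ConsumerIterator<byte[], byte[]> it =
> > > >         streams.get("meetme").get(0).iterator();
> > > >
> > > >     while (it.hasNext()) {
> > > >       MessageAndMetadata<byte[], byte[]> msg = it.next();
> > > >       handle(msg.message());     // hypothetical processing step
> > > >       connector.commitOffsets(); // explicit commit; nothing is saved without it
> > > >     }
> > > >   }
> > > >
> > > >   private static void handle(byte[] payload) {
> > > >     System.out.println(new String(payload));
> > > >   }
> > > > }
> > > >
> > > > Committing after every message is just the simplest thing that works;
> > > > in practice you would commit every N messages or on a timer.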
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Mon, Nov 4, 2013 at 2:42 PM, Vadim Keylis <vkeylis2...@gmail.com> wrote:
> > > >
> > > > > Good afternoon. I was under the impression that if auto commit is
> > > > > set to false, then once the consumer is restarted the logs would be
> > > > > replayed from the beginning. Is that correct?
> > > > >
> > > > > Thanks,
> > > > > Vadim
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
