Hi Guozhang

I didn't get any exceptions in the consumer log when the consumer doesn't
consume.

In the ZooKeeper logs I got INFO messages like:
[2013-11-22 11:30:04,607] INFO Got user-level KeeperException when
processing sessionid:0x1427e5fdbcc0000 type:create cxid:0x92
zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error
Path:/brokers/topics/realTimeIndexNew/partitions/0 Error:KeeperErrorCode =
NoNode for /brokers/topics/realTimeIndexNew/partitions/0
(org.apache.zookeeper.server.PrepRequestProcessor)

However, when the consumer starts consuming I do get some leader-not-available
exceptions, but the consumer consumes fine:

2013-11-22 11:55:44 ProducerSendThread- [WARN ] BrokerPartitionInfo - Error
while fetching metadata [{TopicMetadata for topic notification1 ->
No partition metadata for topic notification1 due to
kafka.common.LeaderNotAvailableException}] for topic [notification1]: class
kafka.common.LeaderNotAvailableException
2013-11-22 11:55:44 ProducerSendThread- [ERROR] DefaultEventHandler -
Failed to collate messages by topic, partition due to: Failed to fetch
topic metadata for topic: notification1
2013-11-22 11:55:45 ProducerSendThread- [WARN ] BrokerPartitionInfo - Error
while fetching metadata [{TopicMetadata for topic notification1 ->
No partition metadata for topic notification1 due to
kafka.common.LeaderNotAvailableException}] for topic [notification1]: class
kafka.common.LeaderNotAvailableException
2013-11-22 11:55:45 ProducerSendThread- [WARN ] BrokerPartitionInfo - Error
while fetching metadata [{TopicMetadata for topic notification1 ->
No partition metadata for topic notification1 due to
kafka.common.LeaderNotAvailableException}] for topic [notification1]: class
kafka.common.LeaderNotAvailableException
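For reference, the manual-commit setup described in the quoted message below can be sketched with the 0.8 high-level consumer API. This is only a minimal sketch: the ZooKeeper address and group id are assumptions (they are not given in the thread), while the topic name and the property values come from the messages here.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // assumption, not from the thread
        props.put("group.id", "test-group");               // assumption, not from the thread
        // Properties as listed in the quoted message:
        props.put("fetch.size", "1000000000");
        props.put("zookeeper.session.timeout.ms", "60000");
        props.put("auto.offset.reset", "smallest");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.enable", "false");

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for the topic mentioned in the producer warnings above.
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("notification1", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCount);

        for (MessageAndMetadata<byte[], byte[]> msg :
                 streams.get("notification1").get(0)) {
            // process msg.message() ...
            // With auto.commit.enable=false, offsets are only advanced
            // when the application calls commitOffsets() itself.
            connector.commitOffsets();
        }
    }
}
```

With auto-commit disabled, a consumer that processes messages but never calls commitOffsets() will restart from the last committed offset, which can look like inconsistent consumption across restarts.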

On Thu, Nov 21, 2013 at 10:02 PM, Guozhang Wang <wangg...@gmail.com> wrote:

> Hi Tarang,
>
> Could you check if there are any exceptions in the consumer logs when it
> does not consume?
>
> Guozhang
>
>
> On Thu, Nov 21, 2013 at 5:17 AM, Tarang Dawer <tarang.da...@gmail.com
> >wrote:
>
> > Hello
> >
> > I am running a Kafka 0.8 consumer with the following configuration:
> >
> > fetch.size=1000000000
> > zookeeper.session.timeout.ms=60000
> > auto.offset.reset=smallest
> > zookeeper.sync.time.ms=200
> > auto.commit.enable=false
> >
> > I am incrementing the offsets manually.
> >
> > While doing so, I am facing a problem: when I clear all the Kafka &
> > ZooKeeper logs and start the consumer, the consumer sometimes shows
> > inconsistent behaviour. Sometimes it starts consuming, sometimes it
> > doesn't. If it doesn't, I kill the consumer and restart it, and only
> > then does the consumption start.
> >
> > I am quite confused by the consumer's behaviour. Could somebody please
> > help me out as to how and where I am going wrong?
> >
> > Thanks & Regards
> > Tarang Dawer
> >
>
>
>
> --
> -- Guozhang
>
