If the consumer's fetch offset is not present on the Kafka server, the server sends back the OffsetOutOfRange error code in the fetch response. The consumer can then issue an OffsetRequest to the server to get the earliest/latest offset for the partitions. Once the consumer receives the OffsetResponse, it can reissue the FetchRequest with that earliest/latest offset.
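The recovery loop Neha describes can be sketched as follows. This is a minimal illustration of the control flow only: `fetch` and `get_offsets` are hypothetical stand-ins for the client's fetch and offset-request calls, not the real 0.7/0.8 consumer API.

```python
# Sketch of the OffsetOutOfRange recovery flow. The callables here
# (fetch, get_offsets) are hypothetical stand-ins, not real Kafka APIs.

EARLIEST, LATEST = -2, -1  # conventional "time" values in an OffsetRequest

def consume(fetch, get_offsets, offset, reset_to=EARLIEST):
    """Fetch once; on OffsetOutOfRange, reset the offset and retry.

    fetch(offset) returns (messages, error) where error is None or
    "OffsetOutOfRange"; get_offsets(time) returns the earliest or latest
    valid offset for the partition.
    """
    messages, error = fetch(offset)
    if error == "OffsetOutOfRange":
        # Ask the broker for a valid offset, then reissue the FetchRequest.
        offset = get_offsets(reset_to)
        messages, error = fetch(offset)
    return messages, offset
```

Whether to reset to the earliest or the latest offset is an application choice: earliest replays data that is still on the broker, latest skips ahead and drops whatever the consumer missed.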
Thanks,
Neha

On Fri, Mar 15, 2013 at 7:04 AM, Christopher Alexander <calexan...@gravycard.com> wrote:

> I would appreciate it if someone can provide some guidance on how to
> handle a consumer offset reset. I know this feature is expected to be baked
> into 0.8.0 (I'm using 0.7.2). Although I am in a local development
> environment, such an exercise would allow me to understand Kafka better and
> build a troubleshooting solution set for an eventual production release.
>
> Thanks, gurus. :)
>
> Chris
>
> ----- Original Message -----
> From: "Christopher Alexander" <calexan...@gravycard.com>
> To: users@kafka.apache.org
> Sent: Thursday, March 14, 2013 11:22:27 AM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> OK, I re-reviewed the Kafka design doc and looked at the topic file
> mytopic-0. It definitely isn't 562949953452239 in size (just 293476). Since
> I am in a local test configuration, how should I resolve the offset drift,
> and where:
>
> 1. In ZK, by wiping a snapshot.XXX file? This would also affect another app
> that is also using ZK.
> 2. In Kafka, by wiping the mytopic-0 file?
>
> Thanks,
>
> Chris
>
> ----- Original Message -----
> From: "Christopher Alexander" <calexan...@gravycard.com>
> To: users@kafka.apache.org
> Sent: Thursday, March 14, 2013 11:02:57 AM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> Thanks Jun,
>
> I don't mean to be obtuse, but could you please provide an example? Which
> file should I determine the size for?
>
> Thanks,
>
> Chris
>
> ----- Original Message -----
> From: "Jun Rao" <jun...@gmail.com>
> To: users@kafka.apache.org
> Sent: Thursday, March 14, 2013 12:18:31 AM
> Subject: Re: kafka.common.OffsetOutOfRangeException
>
> Chris,
>
> The last offset can be calculated by adding the file size to the name of
> the last Kafka segment file. Then you can check whether your offset is in
> that range.
>
> Thanks,
>
> Jun
>
> On Wed, Mar 13, 2013 at 2:53 PM, Christopher Alexander <calexan...@gravycard.com> wrote:
>
> > Thanks for the reply, Philip. I am new to Kafka, so please bear with me if
> > I say something that's "noobish".
> >
> > I am running in a localhost configuration for testing. If I check the kafka
> > logs:
> >
> > > cd /tmp/kafka-logs
> > > ls mytopic-0  # my topic is present
> > > cd mytopic-0
> > > ls 00000000000000000000.kafka
> > > vi 00000000000000000000.kafka
> >
> > This reveals a hex string (or another format). No offset visible.
> >
> > If I check ZK:
> >
> > > cd /tmp/zookeeper/version-2
> > > ls  # I see log and snapshot files
> > > vi filename
> >
> > This reveals binary data. No offset visible.
> >
> > Are these the locations for finding the current/recent offsets? Thanks.
> >
> > ----- Original Message -----
> > From: "Philip O'Toole" <phi...@loggly.com>
> > To: users@kafka.apache.org
> > Sent: Wednesday, March 13, 2013 5:04:01 PM
> > Subject: Re: kafka.common.OffsetOutOfRangeException
> >
> > Is offset 562949953452239, partition 0, actually available on the
> > Kafka broker? Have you checked?
> >
> > Philip
> >
> > On Wed, Mar 13, 2013 at 1:53 PM, Christopher Alexander
> > <calexan...@gravycard.com> wrote:
> > > Hello All,
> > >
> > > I am using Node-Kafka to connect to Kafka 0.7.2. Things were working
> > > fine, but we are experiencing repeated exceptions of the following:
> > >
> > > [2013-03-13 16:45:14,615] ERROR error when processing request
> > > FetchRequest(topic:promoterregack, part:0 offset:562949953452239
> > > maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> > > kafka.common.OffsetOutOfRangeException: offset 562949953452239 is out of
> > > range
> > >     at kafka.log.Log$.findRange(Log.scala:46)
> > >     at kafka.log.Log.read(Log.scala:264)
> > >     at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
> > >     at kafka.server.KafkaRequestHandlers.handleFetchRequest(KafkaRequestHandlers.scala:92)
> > >     at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > >     at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$2.apply(KafkaRequestHandlers.scala:39)
> > >     at kafka.network.Processor.handle(SocketServer.scala:296)
> > >     at kafka.network.Processor.read(SocketServer.scala:319)
> > >     at kafka.network.Processor.run(SocketServer.scala:214)
> > >     at java.lang.Thread.run(Thread.java:662)
> > >
> > > Is there some configuration or manual maintenance I need to perform on
> > > Kafka to remediate the exception? Thank you in advance for your
> > > assistance.
> > >
> > > Kind regards,
> > >
> > > Chris Alexander
> > > Technical Architect and Engineer
> > > Gravy, Inc.
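Plugging the numbers from this thread into Jun's formula answers Philip's question directly: with a single segment named 00000000000000000000.kafka of 293476 bytes, the log ends at offset 293476, while the consumer is requesting 562949953452239.

```python
# Jun's method: log-end offset = last segment's base name + its file size.
log_end = 0 + 293476          # segment name 00000000000000000000, 293476 bytes
requested = 562949953452239   # offset from the failing FetchRequest

# The requested offset lies far beyond the end of the log, which is why the
# broker raises kafka.common.OffsetOutOfRangeException.
print(requested > log_end)  # True
```

So the offset stored by the consumer is corrupt or stale, and resetting it to the broker's earliest/latest offset (as described at the top of the thread) is the remedy.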