Not sure what happened. It could be that the broker received messages with offsets 5 to 10 at one point, but lost them later during an unclean leader election. If that is the case, you will see something like "No broker in ISR is alive for %s. Elect leader %d from live brokers %s. There's potential data loss." in the controller log. Do you see that?
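For example, assuming the default log4j.properties that ships with 0.8 (which sends controller events to logs/controller.log under the Kafka directory; adjust the path if your setup writes elsewhere), a grep on each broker along these lines should surface it:

  # log path assumed from the default 0.8 log4j.properties
  grep "potential data loss" logs/controller.log*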
Otherwise, the consumer somehow got hold of a wrong offset. If you start the consumer in a new group, do you see the same issue? (One quick way to try that is sketched after the quoted message below.)

Thanks,

Jun

On Sat, Feb 15, 2014 at 9:06 PM, Arjun <ar...@socialtwist.com> wrote:

> Hi,
>
> I have set up a clean Kafka cluster on three EC2 nodes and pushed two
> messages into it. But when I invoke the high level consumer, I get the
> error below on the Kafka broker. I also ran the ConsumerOffsetChecker
> tool, and it reports the lag as -6. This is a fresh setup with three
> Kafka brokers and three ZooKeepers running, with replication factor 2.
>
> Can someone please let me know when this arises, and what the workaround
> for it is?
>
> [2014-02-15 23:50:10,760] ERROR [KafkaApi-0] Error when processing fetch
> request for partition [taf.referral.emails.service,1] offset 10 from
> consumer with correlation id 34 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 10 but we only
> have log segments in the range 0 to 4.
>         at kafka.log.Log.read(Log.scala:429)
>         at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:388)
>         at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:334)
>         at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>         at scala.collection.immutable.Map$Map1.map(Map.scala:93)
>         at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:330)
>         at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:296)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:66)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
>         at java.lang.Thread.run(Thread.java:662)
>
> ubuntu@ip-10-235-39-219:~/kafka/kafka_2.8.0-0.8.0$ bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1 --zkconnect 10.235.39.219:2181 --topic taf.referral.emails.service
> Group   Topic                        Pid  Offset  logSize  Lag  Owner
> group1  taf.referral.emails.service  0    10      4        -6   group1_ec2-54-225-44-248.compute-1.amazonaws.com-1392525076823-fb0973d5-0
> group1  taf.referral.emails.service  1    10      4        -6   group1_ec2-54-225-44-248.compute-1.amazonaws.com-1392525076823-fb0973d5-1
>
> Thanks
> Arjun Narasimha Kota
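P.S. One quick way to try the new-group test above without changing your application: the console consumer in 0.8 makes up a fresh consumer group id (console-consumer-<random>) each time it starts unless you pass one, so it begins with no stored offset. A rough sketch, reusing the topic and ZooKeeper address from your output:

  # run from the Kafka directory on any of the brokers
  bin/kafka-console-consumer.sh --zookeeper 10.235.39.219:2181 \
      --topic taf.referral.emails.service --from-beginning

If that reads both messages fine while group1 still asks for offset 10, the problem is most likely the offset stored for group1 in ZooKeeper rather than the data on the brokers.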