Upendra Yadav created KAFKA-4967:
------------------------------------

             Summary: java.io.EOFException Error while committing offsets
                 Key: KAFKA-4967
                 URL: https://issues.apache.org/jira/browse/KAFKA-4967
             Project: Kafka
          Issue Type: Bug
          Components: consumer
    Affects Versions: 0.10.0.1
         Environment: OS : CentOS
            Reporter: Upendra Yadav


Kafka server and client: 0.10.0.1

Kafka server-side configuration:
listeners=PLAINTEXT://:9092
# The configuration below is for old clients that existed before; by now all
# clients have already moved to the latest Kafka client, 0.10.0.1.
log.message.format.version=0.8.2.1
broker.id.generation.enable=false
unclean.leader.election.enable=false

Some of the Kafka consumer configurations (see the setup sketch after this list):
auto.commit.enable is overridden to false
auto.offset.reset is overridden to smallest
consumer.timeout.ms is overridden to 100
dual.commit.enabled is overridden to true
fetch.message.max.bytes is overridden to 209715200
group.id is overridden to crm_172_19_255_187_hadoop_tables
offsets.storage is overridden to kafka
rebalance.backoff.ms is overridden to 6000
zookeeper.session.timeout.ms is overridden to 23000
zookeeper.sync.time.ms is overridden to 2000
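
For reference, a minimal sketch of how such a consumer could be constructed with the old high-level API, using the property values listed above. This is not the actual KafkaHLConsumer code; the class name and the zookeeper.connect value are placeholders, not taken from this report.

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerSetupSketch {
    public static ConsumerConnector create() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // placeholder; real quorum not shown above
        props.put("group.id", "crm_172_19_255_187_hadoop_tables");
        props.put("auto.commit.enable", "false");
        props.put("auto.offset.reset", "smallest");
        props.put("consumer.timeout.ms", "100");
        props.put("dual.commit.enabled", "true");
        props.put("fetch.message.max.bytes", "209715200");
        props.put("offsets.storage", "kafka");
        props.put("rebalance.backoff.ms", "6000");
        props.put("zookeeper.session.timeout.ms", "23000");
        props.put("zookeeper.sync.time.ms", "2000");
        // Old high-level (ZookeeperConsumerConnector-based) consumer API from 0.10.0.1.
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    }
}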

I am getting the exception below on offset commit. The consumer process is still
running after this exception, but when I check the offset position through the
Kafka shell scripts it shows the old position ("Could not fetch offset from
topic1_group1 partition [topic1,0] due to missing offset data in zookeeper").
After some time, when the 2nd commit happens, the position gets updated.

Because dual commit is enabled, I think the Kafka-side position gets updated
successfully both times.

ERROR kafka.consumer.ZookeeperConsumerConnector: [********], Error while committing offsets.
java.io.EOFException
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
        at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
        at kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
        at kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
        at kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
        at kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
        at com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
        at com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)
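
For context, the commit path looks roughly like the sketch below. This is not the actual KafkaHLConsumer code; the class name, topic name, single stream, and loop structure are assumptions based on the configuration above, and "consumer" is a connector created as in the earlier setup sketch.

import java.util.Collections;
import java.util.List;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class CommitPathSketch {
    public static void consumeAndCommit(ConsumerConnector consumer) {
        // One stream for "topic1"; topic name and single-threaded handling are placeholders.
        List<KafkaStream<byte[], byte[]>> streams =
                consumer.createMessageStreams(Collections.singletonMap("topic1", 1)).get("topic1");
        ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();

        while (true) {
            try {
                while (it.hasNext()) {
                    byte[] message = it.next().message();
                    // ... process message ...
                }
            } catch (ConsumerTimeoutException e) {
                // With consumer.timeout.ms=100, hasNext() throws this when no message
                // arrives within 100 ms; it just means "nothing to read right now".
            }
            // Manual commit (auto.commit.enable=false). With dual.commit.enabled=true and
            // offsets.storage=kafka this writes offsets to both Kafka and ZooKeeper.
            // The EOFException above is caught and only logged inside
            // ZookeeperConsumerConnector ("Error while committing offsets."), so this call
            // returns normally and the process keeps running, matching the behaviour
            // described in this report.
            consumer.commitOffsets();
        }
    }
}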



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
