[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ]
Jody edited comment on KAFKA-7282 at 12/19/18 10:04 AM:
--------------------------------------------------------

[~amunro] did you end up with a better configuration? We are running into the same issue: the log files of our Kafka brokers are being spammed with the error you reported above. Does this also imply we have data issues (e.g. are we losing data because of this)? By the way, we are also using Kafka 2.0.0 on OpenShift (version 3.10) with GlusterFS as the storage backend.

Edit: the mail thread you linked contains a follow-up which says that {{write-behind}} may be the critical option to turn off: [https://lists.gluster.org/pipermail/gluster-users/2017-May/031208.html]


> Failed to read `log header` from file channel
> ---------------------------------------------
>
>                 Key: KAFKA-7282
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7282
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.11.0.2, 1.1.1, 2.0.0
>        Environment: Linux
>           Reporter: Alastair Munro
>           Priority: Major
>
> Full stack trace:
> {code:java}
> [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing fetch operation on partition segmenter-evt-v1-14, offset 96745 (kafka.server.ReplicaManager)
> org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`.
> Expected to read 17 bytes, but reached end of file after reading 0 bytes. Started read from position 25935.
> 	at org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40)
> 	at org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24)
> 	at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
> 	at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
> 	at org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286)
> 	at kafka.log.LogSegment.translateOffset(LogSegment.scala:254)
> 	at kafka.log.LogSegment.read(LogSegment.scala:277)
> 	at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)
> 	at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)
> 	at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
> 	at kafka.log.Log.read(Log.scala:1114)
> 	at kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)
> 	at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)
> 	at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> 	at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)
> 	at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)
> 	at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)
> 	at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678)
> 	at kafka.server.KafkaApis.handle(KafkaApis.scala:107)
> 	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
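Editor's note: for readers landing here, the {{write-behind}} translator discussed in the comment above is a per-volume GlusterFS performance option and can be toggled with the Gluster CLI. A minimal sketch, assuming a volume named {{kafka-logs}} (a placeholder; substitute your own volume name and verify the option against your Gluster version's documentation):

{code:bash}
# Check the current setting (defaults to on in stock GlusterFS)
gluster volume get kafka-logs performance.write-behind

# Disable write-behind for the volume, as suggested in the linked mail thread
gluster volume set kafka-logs performance.write-behind off
{code}

Note that changing volume options affects all clients of the volume; remounting (or restarting the brokers using the mount) may be needed for the change to take full effect.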