[ https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311765#comment-15311765 ]

Ewen Cheslack-Postava commented on KAFKA-3764:
----------------------------------------------

Given that the failure is in decompression, it is also possible the issue is due 
to the change from snappy-java 1.1.1.7 to 1.1.2.4 between the two versions. That 
upgrade may have introduced an incompatibility (either a genuine regression, or 
one that reveals a latent problem in the library the ruby client uses). It's a 
bit hard to tell from the diff because of what appear to be simple reformatting 
changes, but there was some churn in that code.
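
As a quick check of that theory, one could push the ruby client's compressed bytes through snappy-java 1.1.2.4 directly, outside the broker. A minimal sketch (not Kafka's actual decompression path; the payload below is a stand-in for the real bytes):

{noformat}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.xerial.snappy.SnappyInputStream;
import org.xerial.snappy.SnappyOutputStream;

public class SnappyRoundTrip {
    public static void main(String[] args) throws IOException {
        // Stand-in payload; substitute the exact compressed bytes the
        // ruby client produced to test cross-version compatibility.
        byte[] payload = "test message".getBytes("UTF-8");

        // Compress with the framed stream format (SnappyOutputStream).
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (SnappyOutputStream out = new SnappyOutputStream(compressed)) {
            out.write(payload);
        }

        // Decompress with SnappyInputStream, the same class that throws
        // "failed to read chunk" in the stack trace below. Feeding it the
        // ruby client's bytes should reproduce the error if the chunk
        // framing is what 1.1.2.4 rejects.
        try (SnappyInputStream in = new SnappyInputStream(
                new ByteArrayInputStream(compressed.toByteArray()))) {
            byte[] buf = new byte[payload.length];
            int n = in.read(buf);
            System.out.println("read " + n + " bytes: "
                    + new String(buf, 0, n, "UTF-8"));
        }
    }
}
{noformat}

Running the same input against both 1.1.1.7 and 1.1.2.4 on the classpath would isolate whether the library version alone explains the failure.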

It might be helpful to get a full hex dump of the offending message, which 
would make it pretty easy to reproduce and track down the issue.
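
For example, something along these lines turns captured bytes into a readable dump (a minimal sketch; the input file is a placeholder for however the offending payload gets captured, e.g. from a packet capture):

{noformat}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class HexDump {
    public static void main(String[] args) throws IOException {
        // args[0]: file holding the raw message bytes (placeholder path).
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        // Print 16 bytes per line with a hex offset, xxd-style.
        for (int i = 0; i < data.length; i += 16) {
            StringBuilder line = new StringBuilder(String.format("%08x  ", i));
            for (int j = i; j < Math.min(i + 16, data.length); j++) {
                line.append(String.format("%02x ", data[j]));
            }
            System.out.println(line);
        }
    }
}
{noformat}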

> Error processing append operation on partition
> ----------------------------------------------
>
>                 Key: KAFKA-3764
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3764
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.0.0
>            Reporter: Martin Nowak
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0, I'm getting plenty of `Error 
> processing append operation on partition` errors. This happens with 
> ruby-kafka as the producer and snappy compression enabled.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException: 
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>         at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>         at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
>         at kafka.log.Log.liftedTree1$1(Log.scala:339)
>         at kafka.log.Log.append(Log.scala:338)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>         at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
>         at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>         at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
>         at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
>         at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
>         at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
>         at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
>         at java.io.DataInputStream.readFully(DataInputStream.java:195)
>         at java.io.DataInputStream.readLong(DataInputStream.java:416)
>         at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}


