That's interesting. The issue is that the response is somehow missing an
entry for [libomirror8,3], which was specified in the request. I don't know
how that could happen in the broker, though. Are you using the latest code
in the 0.8 branch?

 Incomplete response (ProducerResponse(334159,Map([libomirror8,9] ->
ProducerResponseStatus(0,96820), [libomirror8,6] ->
ProducerResponseStatus(2,0), [libomirror8,0] ->
ProducerResponseStatus(0,0)))) for producer request (Name: ProducerRequest;
Version: 0; CorrelationId: 334159; ClientId: ProducerPerformance;
RequiredAcks: -1; AckTimeoutMs: 3000 ms; TopicAndPartition: [libomirror8,9]
-> 102660,[libomirror8,6] -> 102660,[libomirror8,3] ->
102660,[libomirror8,0] -> 102660)
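For reference, the producer treats a response as "incomplete" when the set of partitions acked in the response does not cover the set sent in the request. Here the request carried four partitions but the response map has only three, with [libomirror8,3] absent. A minimal sketch of that check (class and method names are illustrative, not the actual Kafka 0.8 DefaultEventHandler code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hedged sketch: detecting an "incomplete" producer response by comparing
// the partitions in the request against the partitions in the response.
// Names here are hypothetical, not the real Kafka 0.8 internals.
public class IncompleteResponseCheck {
    // Topic-partition keys written the way the logs print them, e.g. "[libomirror8,3]".
    static Set<String> missingPartitions(Set<String> requested, Set<String> responded) {
        Set<String> missing = new HashSet<>(requested);
        missing.removeAll(responded); // whatever the broker never acked
        return missing;
    }

    public static void main(String[] args) {
        Set<String> requested = new HashSet<>(Arrays.asList(
                "[libomirror8,9]", "[libomirror8,6]", "[libomirror8,3]", "[libomirror8,0]"));
        Set<String> responded = new HashSet<>(Arrays.asList(
                "[libomirror8,9]", "[libomirror8,6]", "[libomirror8,0]"));
        Set<String> missing = missingPartitions(requested, responded);
        if (!missing.isEmpty()) {
            // Analogous to the KafkaException path seen in the stack trace below.
            System.out.println("Incomplete response; missing: " + missing);
        }
    }
}
```

With the four requested partitions and three acked ones above, the check flags [libomirror8,3] as missing, which matches the error in the quoted log.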

Thanks,

Jun


On Thu, Jun 6, 2013 at 8:17 AM, Yu, Libo <libo...@citi.com> wrote:

> The publisher tried to publish 10 GB of data (message size 10 KB) to 3
> brokers. This error occurred very frequently and caused a long delay.
>
> [2013-06-06 11:13:29,491] INFO Shutting down producer
> (kafka.producer.Producer)
> [2013-06-06 11:13:29,492] INFO Beging shutting down ProducerSendThread
> (kafka.producer.async.ProducerSendThread)
> [2013-06-06 11:13:32,804] INFO Shutting down producer
> (kafka.producer.Producer)
> [2013-06-06 11:13:32,804] INFO Beging shutting down ProducerSendThread
> (kafka.producer.async.ProducerSendThread)
> [2013-06-06 11:13:33,716] WARN Failed to send producer request with
> correlation id 334159 to broker 1 with data for partitions
> [libomirror8,9],[libomirror8,6],[libomirror8,0],[libomirror8,3]
> (kafka.producer.async.DefaultEventHandler)
> kafka.common.KafkaException: Incomplete response
> (ProducerResponse(334159,Map([libomirror8,9] ->
> ProducerResponseStatus(0,96820), [libomirror8,6] ->
> ProducerResponseStatus(2,0), [libomirror8,0] ->
> ProducerResponseStatus(0,0)))) for producer request (Name: ProducerRequest;
> Version: 0; CorrelationId: 334159; ClientId: ProducerPerformance;
> RequiredAcks: -1; AckTimeoutMs: 3000 ms; TopicAndPartition: [libomirror8,9]
> -> 102660,[libomirror8,6] -> 102660,[libomirror8,3] ->
> 102660,[libomirror8,0] -> 102660)
>         at
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:250)
>         at
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:107)
>         at
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:101)
>         at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>         at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>         at
> scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
>         at
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
>         at
> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:101)
>         at
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:73)
>         at
> kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
>         at
> kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
>         at
> kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
>         at scala.collection.immutable.Stream.foreach(Stream.scala:254)
>         at
> kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
>         at
> kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> [2013-06-06 11:13:33,728] INFO Back off for 100 ms before retrying send.
> Remaining retries = 3 (kafka.producer.async.DefaultEventHandler)
>
>
> Regards,
>
> Libo
>
>
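The backoff and retry count in the last log line above are governed by the old (0.8-era) producer settings. A sketch of the relevant properties, assuming the classic Scala producer; the exact names should be verified against the producer config documentation for the Kafka version in use:

```properties
# Hedged sketch of 0.8-era producer settings behind the quoted log lines;
# names may differ in other Kafka versions.
message.send.max.retries=3   # "Remaining retries = 3"
retry.backoff.ms=100         # "Back off for 100 ms before retrying send"
request.required.acks=-1     # matches "RequiredAcks: -1" in the request
```

Raising the retry count or backoff can mask the symptom, but it does not explain why the broker returned a partial response in the first place.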
