[ https://issues.apache.org/jira/browse/KAFKA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525481#comment-16525481 ]

jason wang commented on KAFKA-6177:
-----------------------------------

We have seen this in our production system as well, when mirroring data from one 
cluster to another data center.  The producer's serialized-size estimate appears 
to be inexact.

Separately, we noticed that message.max.bytes does not permit a full-sized 
payload.  With the broker's default message.max.bytes of 1000012, we could only 
send 999940 bytes using a 1.0 client and broker, and 999978 bytes using 0.10 
clients against 1.0 brokers.  We have both producer and broker configured at 
1000012.
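The gap between the configured limit and the largest payload that actually went 
through is consistent with fixed per-message overhead (record/batch headers, 
timestamps, etc.) counting against message.max.bytes. A quick sketch of the 
arithmetic, using only the numbers reported above (the overhead breakdown is 
inferred, not taken from the Kafka source):

```python
# Broker default message.max.bytes, as reported above.
MESSAGE_MAX_BYTES = 1000012

# Largest value that could actually be sent, per client/broker combination.
largest_sent = {
    "1.0 client -> 1.0 broker": 999940,
    "0.10 client -> 1.0 broker": 999978,
}

# The difference is the per-message overhead that counts toward the limit.
for path, payload in largest_sent.items():
    overhead = MESSAGE_MAX_BYTES - payload
    print(f"{path}: {overhead} bytes of overhead")
```

This suggests roughly 72 bytes of overhead for the 1.0 message format and 34 
bytes for the 0.10 format, which would explain why a payload sized exactly at 
message.max.bytes is rejected.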

Also, this should not be classified as a Minor issue: mirror maker is losing 
data.
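A common mitigation (a sketch of one workaround, not an official fix for this 
ticket) is to give the mirror maker producer and the destination cluster some 
headroom over the source cluster's limit, so that records which pick up extra 
overhead when re-batched still fit. Assuming the destination brokers can be 
reconfigured, something like:

```properties
# mirror maker producer.config (values are illustrative, not prescriptive)
# Allow requests somewhat larger than the source cluster's message.max.bytes.
max.request.size=1048576

# Destination broker server.properties must accept at least as much:
#   message.max.bytes=1048576
#   replica.fetch.max.bytes=1048576
# and, for per-topic overrides:
#   max.message.bytes=1048576
```

The key point is that the destination-side limits must be at least as large as 
the largest batch the mirror maker producer can emit, not merely equal to the 
source cluster's message.max.bytes.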

 

> kafka-mirror-maker.sh RecordTooLargeException
> ---------------------------------------------
>
>                 Key: KAFKA-6177
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6177
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>    Affects Versions: 0.10.1.1
>         Environment: centos 7
>            Reporter: Rémi REY
>            Priority: Minor
>              Labels: support
>         Attachments: consumer.config, producer.config, server.properties
>
>
> Hi all,
> I am facing an issue with kafka-mirror-maker.sh.
> We have two Kafka clusters with the same configuration, and mirror maker 
> instances are in charge of the mirroring between the clusters.
> We haven't changed the default message-size configuration, so the 
> 1000012-byte limit applies on both clusters.
> We are seeing the following error on the mirroring side:
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.errors.RecordTooLargeException: The request included 
> a message larger than the max message size the server will accept.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 13846 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Producer is closed forcefully.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> java.lang.Thread.run(Thread.java:745)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] 
> FATAL [mirrormaker-thread-0] Mirror maker thread failure due to  
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> java.lang.IllegalStateException: Cannot send after the producer is closed.
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.Iterator$class.foreach(Iterator.scala:893)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
> {code}
> Why am I getting this error?
> {code}
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] 
> ERROR Error when sending message to topic my_topic_name with key: 81 bytes, 
> value: 1000272 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: 
> org.apache.kafka.common.errors.RecordTooLargeException: The request included 
> a message larger than the max message size the server will accept.
> {code}
> How can mirror maker encounter a 1000272-byte message when the kafka 
> cluster being mirrored has the default limit of 1000012 bytes per 
> message?
> The mirror maker consumer and producer config files are attached.
> Thanks for your inputs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
