[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15690723#comment-15690723 ]
Srinivas Dhruvakumar commented on KAFKA-4430:
---------------------------------------------

I am a bit confused: why would the message size be bigger than 1 MB on the AGG Kafka if the MirrorMaker batch.size is 500 KB and max.request.size is 1 MB? max.request.size checks the serialized message size, and I set batch.size taking compression into account. In the worst case, with a compression ratio of 1, the size would still be under 1 MB.

> Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4430
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.9.0.1
>         Environment: Production
>            Reporter: Srinivas Dhruvakumar
>              Labels: newbie
>
> I have a setup as below:
> DC Kafka -> MirrorMaker -> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC and AGG Kafka clusters. MirrorMaker producer settings: batch.size is set to 500 KB, max.request.size is set to 1 MB, acks to 0, and compression to gzip.
> However, on the Aggregate Kafka I get the following exception:
> Closing connection due to error during produce request with correlation id 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to send messages smaller than 1 MB. Are the messages getting dropped? Under what circumstances does this error occur?
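For reference, a minimal sketch of the configuration described above, using standard Kafka property names; the values are taken from this report, and the file names and comments are assumptions, not the reporter's actual files:

    # MirrorMaker producer.properties (values as described in this issue)
    # ~500 KB batches
    batch.size=500000
    # ~1 MB cap on a single produce request
    max.request.size=1000000
    acks=0
    compression.type=gzip

    # Broker-side server.properties on both the DC and AGG clusters
    # ~1 MB per-message limit enforced by the broker
    message.max.bytes=1000000

Note that the broker-level property is message.max.bytes, while max.message.bytes is the per-topic override, so it may matter whether the 1 MB limit in this setup was applied at the broker or the topic level.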