[ https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Srinivas Dhruvakumar updated KAFKA-4430:
----------------------------------------
    Description: 
I have a setup as below:
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here are my settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings -- batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks is set to 0, and compression.type is 
gzip. 
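
For clarity, here is a rough sketch of the MirrorMaker producer configuration 
described above, expressed as Java producer properties. The bootstrap.servers 
value is a placeholder and the numeric values are approximations from this 
report, not copied from the actual producer.config file.

import java.util.Properties;

public class MirrorMakerProducerConfigSketch {
    public static void main(String[] args) {
        // Approximation of the MirrorMaker producer settings described in
        // this report; "agg-kafka:9092" is a hypothetical AGG cluster address.
        Properties props = new Properties();
        props.put("bootstrap.servers", "agg-kafka:9092");
        props.put("acks", "0");                   // fire-and-forget, as described
        props.put("batch.size", "500000");        // ~500 KB per batch
        props.put("max.request.size", "1000000"); // ~1 MB per request
        props.put("compression.type", "gzip");
        props.list(System.out);                   // print the effective settings
    }
}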
However, on the aggregate Kafka cluster I get the following exception: 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I noticed that this happens in the 
Kafka API class. I have configured MirrorMaker to send messages smaller than 
1 MB. Are the messages getting dropped, and why is this logged at INFO level? 
Shouldn't it at least be logged as a warning? 




  was:
I have a setup as below:
DC Kafka 
Mirrormaker 
Aggregate Kafka
Here are my settings: I have set max.message.bytes to 1 MB on both the DC and 
AGG Kafka sides. MirrorMaker producer settings -- batch.size is set to 500 KB, 
max.request.size is set to 1 MB, acks is set to 0, and compression.type is 
gzip. 
However, on the aggregate Kafka cluster I get the following exception: 

Closing connection due to error during produce request with correlation id 
414156659 from client id producer-1 with ack=0
Topic and partition to exceptions: [topic1,6] -> 
kafka.common.MessageSizeTooLargeException

Is this a bug, or why would this happen? I noticed that this happens in the 
Kafka API class. I have configured MirrorMaker to send messages smaller than 
1 MB.





> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4430
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.9.0.1
>         Environment: Production 
>            Reporter: Srinivas Dhruvakumar
>              Labels: newbie
>
> I have a setup as below:
> DC Kafka 
> Mirrormaker 
> Aggregate Kafka
> Here are my settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings -- batch.size is set to 
> 500 KB, max.request.size is set to 1 MB, acks is set to 0, and 
> compression.type is gzip. 
> However, on the aggregate Kafka cluster I get the following exception: 
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I noticed that this happens in the 
> Kafka API class. I have configured MirrorMaker to send messages smaller than 
> 1 MB. Are the messages getting dropped, and why is this logged at INFO 
> level? Shouldn't it at least be logged as a warning? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
