Hello Daniccan.  I apologize for the dumb question, but did you also check 
"message.max.bytes" on the broker?  The default is about 1 MB (1000012 bytes) for 
Kafka 0.10.0.  If you need to publish larger messages, you will need to adjust 
that on the brokers and then restart them.
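
For example, something along these lines on each broker (the sizes below are 
just illustrative, assuming you want headroom for records up to ~6 MB):

    # server.properties on each broker -- restart required
    message.max.bytes=6291456
    # replicas must be able to fetch the largest record, so keep this >= message.max.bytes
    replica.fetch.max.bytes=6291456

The client-side limits have to line up with that as well: the producer's 
"max.request.size" and the consumer's "max.partition.fetch.bytes" (or 
"fetch.message.max.bytes" for the old consumer) each need to be at least as 
large as your biggest serialized record.  Also note that the second error you 
pasted is for a 5855801-byte record, which is larger than 5 MB (5242880 bytes), 
so a 5 MB "max.request.size" still would not cover it.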

-David

On 10/14/16, 5:49 AM, "Daniccan VP" <danic...@iqsystech.com> wrote:

    Hi,
    
    I would like to request help with a question regarding the "max.request.size" 
configuration that we use in the Kafka Producer. I sometimes get the following 
exceptions in my project.
    
    org.apache.kafka.common.errors.RecordTooLargeException: The request 
included a message larger than the max message size the server will accept.
    
    org.apache.kafka.common.errors.RecordTooLargeException: The message is 
5855801 bytes when serialized which is larger than the maximum request size you 
have configured with the max.request.size configuration.
    
    After facing these errors a couple of times, I have now set 
"max.request.size" to 5 MB, but I still get the errors above. Does increasing 
"max.request.size" affect the performance of the Kafka Producer and Consumer? 
Is there any workaround for this?
    
    Thanks and Regards,
    Daniccan VP | Junior Software Engineer
    Email : danic...@iqsystech.com
    
    
