You can consider increasing `max.request.size` a little (the default
value is `1048576`). Looking at the Kafka client source code, it counts
[`key size` + `value size` + `header size` + others] together, so the
calculated size can end up a little larger than the default value.

Please see https://kafka.apache.org/documentation/#configuration for the configuration details.
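
As a rough sketch, raising the limit on the producer side looks like the
following (the broker address "localhost:9092" and topic "my-topic" are
placeholders, and the 2 MB value is just an example):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeRecordProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Raise max.request.size above the 1048576-byte default to leave
        // headroom for key + value + headers + record overhead.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2 * 1024 * 1024);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "some-key", "some-large-value"));
        }
    }
}

Note that the broker and topic have their own limits (`message.max.bytes`
and `max.message.bytes`), so if the server still rejects the record you may
need to raise those as well.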

On Mon, Jul 2, 2018 at 5:08 PM, <jerryrichard...@tutanota.com> wrote:

> Hi all,
>
> I get this error even when my records are smaller than the 1000012 byte
> limit:
>
> org.apache.kafka.common.errors.RecordTooLargeException: The request
> included a message larger than the max message size the server will accept.
>
> How do I ensure that my producer doesn't send records that are too large?
>
> Thanks in advance for any suggestions and help.
>
