Hello,

Did you have compression turned on together with batching (in terms of the
number of messages)? In that case the whole compressed message set is treated
as a single message on the broker, and hence it could exceed the size limit.

In newer versions we have changed the batching criterion from a number of
messages to a number of bytes, which is aimed at resolving such issues.
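
For illustration, here is a minimal sketch of what byte-based batching plus
compression looks like with the new Java producer. The property names are
those of the 0.8.2+ Java producer; the class name, broker address, and sizes
are placeholders, not values from this thread:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class ByteBatchingExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Batches are bounded in bytes per partition, not in number of messages.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
            // Each compressed batch is sent to the broker as one message set.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

            KafkaProducer<String, byte[]> producer =
                new KafkaProducer<String, byte[]>(props);
            producer.close();
        }
    }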

Guozhang

On Thu, Mar 3, 2016 at 1:04 PM, Fang Wong <fw...@salesforce.com> wrote:

> Got the following error message with Kafka 0.8.2.1:
> [2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to
> invalid request: Request of length 1937006964 is not valid, it is larger
> than the maximum size of 104857600 bytes. (kafka.network.Processor)
>
> We didn't send a large message at all; it looks like an encoding issue or a
> partial request. Any suggestions on how to fix it?
>
> The code is like below:
>
>     ByteArrayOutputStream bos = new ByteArrayOutputStream();
>
>     DataOutputStream dos = new DataOutputStream(bos);
>
>     // Prepend the current timestamp to the payload.
>     dos.writeLong(System.currentTimeMillis());
>
>     OutputStreamWriter byteWriter = new OutputStreamWriter(bos,
>         com.force.commons.text.EncodingUtil.UTF_ENCODING);
>
>     gson.toJson(obj, byteWriter);
>
>     // Flush the writer so the JSON is fully written into bos before copying.
>     byteWriter.flush();
>
>     byte[] payload = bos.toByteArray();
>
>     ProducerRecord<String, byte[]> data =
>         new ProducerRecord<String, byte[]>("Topic", 0, null, payload);
>
>     kafkaProducer.send(data);
>



-- 
-- Guozhang
