[ https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969210#comment-15969210 ]
Ismael Juma commented on KAFKA-5062:
------------------------------------
Yes, that is reasonable.
In fact, we currently materialize the whole RecordBatch/MessageSet. For message
format V2, we could easily change this to materialize a single record/message at
a time using RecordBatch.streamingIterator(), but the solution suggested by
[~junrao] seems better.
We do something similar in `FileChannelRecordBatch` for the magic and
lastOffset fields to avoid loading too much data from disk. A similar approach
could be taken to avoid decompressing data unnecessarily.
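To illustrate the lazy-header idea in isolation (a minimal sketch under assumed field offsets, not the actual `FileChannelRecordBatch` code or the real batch format), the class below reads only a small, fixed-size slice of the batch from a FileChannel and exposes the magic and lastOffset fields without materializing the rest of the batch:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch only: read a small, fixed-size header slice from disk instead of
// loading the whole record batch, in the spirit of the lazy magic/lastOffset
// handling described above. The header layout (offsets and sizes) below is
// hypothetical, not the real Kafka batch format.
public class LazyBatchHeader {
    private static final int HEADER_SIZE = 17;     // hypothetical: enough bytes to cover the fields we need
    private static final int LAST_OFFSET_POS = 0;  // hypothetical position of the 8-byte lastOffset field
    private static final int MAGIC_POS = 16;       // hypothetical position of the 1-byte magic field

    private final FileChannel channel;
    private final long position;   // start of the batch within the file
    private ByteBuffer header;     // loaded lazily; at most HEADER_SIZE bytes are ever read

    public LazyBatchHeader(FileChannel channel, long position) {
        this.channel = channel;
        this.position = position;
    }

    // Load only the header bytes on first access, never the full batch payload.
    private ByteBuffer header() throws IOException {
        if (header == null) {
            ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE);
            while (buf.hasRemaining()) {
                if (channel.read(buf, position + buf.position()) < 0)
                    throw new IOException("Unexpected end of file while reading batch header");
            }
            buf.flip();
            header = buf;
        }
        return header;
    }

    public long lastOffset() throws IOException {
        return header().getLong(LAST_OFFSET_POS);
    }

    public byte magic() throws IOException {
        return header().get(MAGIC_POS);
    }

    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of(args[0]), StandardOpenOption.READ)) {
            LazyBatchHeader batch = new LazyBatchHeader(ch, 0L);
            System.out.println("magic=" + batch.magic() + ", lastOffset=" + batch.lastOffset());
        }
    }
}
```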
> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> ------------------------------------------------------------------------------
>
> Key: KAFKA-5062
> URL: https://issues.apache.org/jira/browse/KAFKA-5062
> Project: Kafka
> Issue Type: Bug
> Reporter: Apurva Mehta
>
> In some circumstances, it is possible to cause a Kafka broker to allocate
> massive amounts of memory by writing malformed bytes to the broker's port.
> While investigating an issue, we saw byte arrays on the Kafka heap of up to
> 1.8 gigabytes, the first 360 bytes of which were non-Kafka requests -- an
> application was writing the wrong data to Kafka, causing the broker to
> interpret the request size as 1.8 GB and then allocate that amount. Apart from
> the first 360 bytes, the rest of the 1.8 GB byte array was null bytes.
> We have socket.request.max.bytes set at 100 MB to protect against this kind
> of thing, but somehow that limit is not always respected. We need to
> investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]
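To make the failure mode above concrete, here is a minimal, self-contained sketch (not the broker's actual network layer) of a size-prefixed read that validates the 4-byte length field against a configured maximum before allocating; the maxRequestBytes parameter stands in for socket.request.max.bytes. Without such a check, arbitrary leading bytes can decode to a multi-gigabyte size (for example, the four ASCII bytes "kafk", 0x6B 0x61 0x66 0x6B, decode to an int of roughly 1.8 billion) and drive the allocation described above.

```java
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Sketch only, not the broker's actual network layer: read a 4-byte size
// prefix and validate it against a configured maximum (standing in for
// socket.request.max.bytes) before allocating the request buffer.
public final class SizePrefixedReader {

    public static ByteBuffer readRequest(ReadableByteChannel channel, int maxRequestBytes) throws IOException {
        ByteBuffer sizeBuf = ByteBuffer.allocate(4);
        readFully(channel, sizeBuf);
        sizeBuf.flip();
        int size = sizeBuf.getInt();

        // Reject before allocating: a malformed or hostile size field must not
        // be allowed to drive a multi-gigabyte allocation on the heap.
        if (size < 0 || size > maxRequestBytes)
            throw new IOException("Invalid request size " + size + " (max allowed " + maxRequestBytes + ")");

        ByteBuffer payload = ByteBuffer.allocate(size);
        readFully(channel, payload);
        payload.flip();
        return payload;
    }

    private static void readFully(ReadableByteChannel channel, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            if (channel.read(buf) < 0)
                throw new EOFException("Connection closed before the full request was read");
        }
    }
}
```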