[ https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969167#comment-15969167 ]
Jun Rao commented on KAFKA-5062:
--------------------------------

Currently, we decompress messages on the broker just to verify the timestamp and the relative offset, which is done by materializing the whole message first. We could improve this by doing the check in a streaming way during decompression, without requiring the full message to be materialized. This would reduce the memory risk on the broker.

> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-5062
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5062
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Apurva Mehta
>
> In some circumstances, it is possible to cause a Kafka broker to allocate massive amounts of memory by writing malformed bytes to the broker's port.
> While investigating an issue, we saw byte arrays of up to 1.8 GB on the Kafka heap, the first 360 bytes of which were not Kafka requests -- an application was writing the wrong data to Kafka, causing the broker to interpret the request size as 1.8 GB and then allocate that much memory. Apart from the first 360 bytes, the rest of the 1.8 GB byte array was null.
> We have socket.request.max.bytes set to 100 MB to protect against this kind of thing, but somehow that limit is not always respected. We need to investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
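Below is a minimal, hypothetical sketch of the streaming check described in the comment above. It does not use Kafka's real message format or internal classes; the toy per-record layout (long offset delta, long timestamp, int value size, payload) and the class name StreamingBatchValidator are invented for illustration. The point is only that each record is read and validated directly from the decompression stream, so memory use stays bounded by one record rather than by the whole materialized batch.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

// Illustrative only: validate relative offsets and timestamps while streaming
// through the decompressor, instead of materializing every record first.
public class StreamingBatchValidator {

    // Assumed toy record layout: long offsetDelta, long timestamp, int valueSize, value bytes.
    public static void validate(InputStream compressed,
                                int recordCount,
                                long baseTimestamp,
                                long maxTimestampDiff) throws IOException {
        try (DataInputStream in = new DataInputStream(new GZIPInputStream(compressed))) {
            long expectedDelta = 0;
            for (int i = 0; i < recordCount; i++) {
                long offsetDelta = in.readLong();
                long timestamp = in.readLong();
                int valueSize = in.readInt();

                if (offsetDelta != expectedDelta)
                    throw new IOException("Non-consecutive relative offset at record " + i);
                if (Math.abs(timestamp - baseTimestamp) > maxTimestampDiff)
                    throw new IOException("Timestamp out of range at record " + i);

                // Skip the payload instead of buffering it, so only one record's
                // header is ever held in memory at a time.
                int remaining = valueSize;
                while (remaining > 0) {
                    int skipped = in.skipBytes(remaining);
                    if (skipped <= 0)
                        throw new IOException("Truncated payload at record " + i);
                    remaining -= skipped;
                }
                expectedDelta++;
            }
        }
    }
}
{code}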
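The allocation issue itself comes down to trusting the 4-byte size prefix of a request frame before comparing it against the configured limit. The sketch below is not Kafka's network layer (the class name BoundedFrameReader and its API are made up); it only illustrates the kind of bound, analogous in spirit to socket.request.max.bytes, that should be enforced before the payload buffer is allocated.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Illustrative only: read a size-prefixed frame, but reject sizes that are
// negative or larger than the configured maximum before allocating.
public class BoundedFrameReader {

    private final int maxRequestBytes;

    public BoundedFrameReader(int maxRequestBytes) {
        this.maxRequestBytes = maxRequestBytes;
    }

    public ByteBuffer readFrame(ReadableByteChannel channel) throws IOException {
        ByteBuffer sizeBuf = ByteBuffer.allocate(4);
        readFully(channel, sizeBuf);
        sizeBuf.flip();
        int size = sizeBuf.getInt();

        // Without this check, a bogus size field (e.g. bytes written by a
        // non-Kafka client) would drive a multi-gigabyte allocation below.
        if (size < 0 || size > maxRequestBytes)
            throw new IOException("Invalid request size " + size + " (max " + maxRequestBytes + ")");

        ByteBuffer payload = ByteBuffer.allocate(size);
        readFully(channel, payload);
        payload.flip();
        return payload;
    }

    private static void readFully(ReadableByteChannel channel, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            if (channel.read(buf) < 0)
                throw new EOFException("Connection closed mid-frame");
        }
    }
}
{code}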