[ https://issues.apache.org/jira/browse/KAFKA-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789992#comment-13789992 ]
Jun Rao commented on KAFKA-1077:
--------------------------------
That check is on the broker side: if you send a request larger than 100 MB,
the broker will throw an exception. In your case, as I understand it, you get
the OOME on the consumer. So the fetch request is not big, but the fetch
response is.
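To illustrate the distinction, here is a minimal Java sketch of the size-prefixed receive pattern that BoundedByteBufferReceive follows; this is not the actual Kafka source, and the class and method names are illustrative. The point is where the bound applies versus where the allocation happens.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative sketch, not the actual kafka.network.BoundedByteBufferReceive.
// The broker builds its receive with maxSize = socket.request.max.bytes, so
// oversized *requests* are rejected up front; the consumer's receive is
// effectively unbounded, so an oversized *response* only fails at allocation
// time, with an OutOfMemoryError.
public class SizePrefixedReceive {
    private final int maxSize;

    public SizePrefixedReceive(int maxSize) {
        this.maxSize = maxSize;
    }

    public ByteBuffer readFrom(DataInputStream in) throws IOException {
        int size = in.readInt(); // 4-byte size prefix of the payload
        if (size < 0 || size > maxSize) {
            // Broker side: a too-large request is rejected here.
            throw new IOException("Payload of " + size
                    + " bytes exceeds limit " + maxSize);
        }
        // Consumer side (maxSize effectively Integer.MAX_VALUE): a huge size
        // value reaches this allocation and can throw OutOfMemoryError.
        ByteBuffer buffer = ByteBuffer.allocate(size);
        in.readFully(buffer.array());
        return buffer;
    }
}
{code}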
> OutOfMemoryError when consuming large messages
> ----------------------------------------------
>
> Key: KAFKA-1077
> URL: https://issues.apache.org/jira/browse/KAFKA-1077
> Project: Kafka
> Issue Type: Bug
> Components: config, network
> Reporter: Xiejing
> Assignee: Jun Rao
>
> We set 'socket.request.max.bytes' to 100 * 1024 * 1024, but we still see an
> OutOfMemoryError when consuming messages (size 1 MB).
> e.g.
> [08/10/13 05:44:47:047 AM EDT] 102 ERROR network.BoundedByteBufferReceive:
> OOME with size 858861616
> java.lang.OutOfMemoryError: Java heap space
> 858861616 is much larger than 100 * 1024 * 1024 (104857600), yet no
> InvalidRequestException is thrown in BoundedByteBufferReceive.
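To make the broker/consumer split concrete, below is a hedged sketch of the consumer-side setting that governs fetch-response buffers in the 0.8.x high-level consumer. The property names come from the 0.8.x consumer docs; the connection string, group id, and size value are placeholders, not from this ticket.

{code:java}
import java.util.Properties;

// Hedged sketch of 0.8.x high-level consumer configuration.
public class ConsumerSizing {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        // Per-partition fetch buffer on the consumer; it must be at least as
        // large as the largest message the consumer may receive. This, not
        // the broker's socket.request.max.bytes, is what bounds the memory
        // used for fetch responses.
        props.put("fetch.message.max.bytes", String.valueOf(2 * 1024 * 1024));
        return props;
    }
}
{code}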