[ https://issues.apache.org/jira/browse/KAFKA-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384558#comment-14384558 ]
Jay Kreps commented on KAFKA-2045:
----------------------------------

Yeah, as you say, I think bounding memory would still be possible. Once a certain amount of memory was in use, you would not begin new socket reads until more memory was available. The issue with this is just that it cuts across many layers, so it may be tricky to implement. Anyhow, all of these approaches are worth considering. I think the real things we have to establish are:
1. We can actually make serious performance improvements by improving memory allocation patterns.
2. We don't mangle the code too badly in doing so.

> Memory Management on the consumer
> ---------------------------------
>
>                 Key: KAFKA-2045
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2045
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Guozhang Wang
>
> We need to add memory management on the new consumer, as we did in the
> new producer. This would probably include:
> 1. byte buffer re-use for fetch response partition data.
> 2. byte buffer re-use for on-the-fly de-compression.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
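[Editor's note] A minimal sketch of the memory-bounded read idea Jay describes: a pool with a hard memory budget that hands out reusable buffers and signals the caller to back off (i.e., not start new socket reads) until memory is released. All names here (BoundedBufferPool, tryAllocate, release) are illustrative assumptions for this sketch, not Kafka's actual API.

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical bounded buffer pool; names are illustrative, not Kafka's API.
public class BoundedBufferPool {
    private long availableMemory;             // budget not yet handed out as buffers
    private final int bufferSize;             // fixed size of each reusable buffer
    private final Deque<ByteBuffer> free = new ArrayDeque<>(); // released buffers, kept for reuse

    public BoundedBufferPool(long totalMemory, int bufferSize) {
        this.availableMemory = totalMemory;
        this.bufferSize = bufferSize;
    }

    /**
     * Try to hand out a buffer. Returns null when the budget is exhausted,
     * in which case the caller (e.g. the network layer) would skip starting
     * new socket reads until memory is released back to the pool.
     */
    public synchronized ByteBuffer tryAllocate() {
        if (!free.isEmpty()) {
            return free.pollFirst();          // reuse a previously released buffer
        }
        if (availableMemory >= bufferSize) {
            availableMemory -= bufferSize;    // charge the new buffer against the budget
            return ByteBuffer.allocate(bufferSize);
        }
        return null;                          // over budget: caller backs off
    }

    /** Return a buffer to the pool so later reads can proceed. */
    public synchronized void release(ByteBuffer buffer) {
        buffer.clear();
        free.addFirst(buffer);
    }
}
{code}

In use, the network layer would call tryAllocate() before initiating a read for a fetch response and simply skip that read when it gets null, retrying after release() has returned memory to the pool. Plumbing that check into the socket-read path is exactly the cross-layer coupling Jay notes may be tricky to implement.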