[ https://issues.apache.org/jira/browse/KAFKA-3892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345103#comment-15345103 ]

Noah Sloan commented on KAFKA-3892:
-----------------------------------

I believe the ultimate cause is that either a request is being made for all 
topic metadata, or a broker is mistakenly responding with all topic metadata. I 
was not able to figure out why that would happen.

I think the best immediate fix is to have the client defensively prune the 
metadata response, retaining only the topics it is subscribed to. My PR does 
not affect the case where a topic pattern is used for subscription, so it 
seems like a fairly safe change to me.
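The pruning I have in mind can be sketched roughly as below. This is a minimal illustration, not the actual client code; the `pruneTo` helper and the `Map<topic, partitionCount>` shape of the metadata are assumptions for the example:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MetadataPruner {

    // Defensively drop metadata entries for topics the client is not
    // subscribed to, so an oversized metadata response is not retained.
    static Map<String, Integer> pruneTo(Map<String, Integer> partitionCounts,
                                        Set<String> subscribedTopics) {
        Map<String, Integer> pruned = new HashMap<>();
        for (Map.Entry<String, Integer> e : partitionCounts.entrySet()) {
            if (subscribedTopics.contains(e.getKey())) {
                pruned.put(e.getKey(), e.getValue());
            }
        }
        return pruned;
    }

    public static void main(String[] args) {
        // Simulated metadata response covering the whole cluster.
        Map<String, Integer> metadata = new HashMap<>();
        metadata.put("orders", 12);
        metadata.put("audit", 4);
        metadata.put("unrelated-topic", 50);

        // The client only subscribes to one topic.
        Set<String> subscribed = new HashSet<>();
        subscribed.add("orders");

        Map<String, Integer> pruned = pruneTo(metadata, subscribed);
        System.out.println(pruned.size());              // 1
        System.out.println(pruned.containsKey("orders")); // true
    }
}
```

With this kind of filter in place, the cached Cluster only ever holds entries proportional to the subscription, not to the whole cluster.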

> Clients retain metadata for non-subscribed topics
> -------------------------------------------------
>
>                 Key: KAFKA-3892
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3892
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1
>            Reporter: Noah Sloan
>
> After upgrading to 0.9.0.1 from 0.8.2 (and adopting the new consumer and 
> producer classes), we noticed services with small heaps crashing due to 
> OutOfMemoryErrors. These services contained many producers and consumers (~20 
> total) and were connected to brokers with >2000 topics and over 10k 
> partitions. Heap dumps revealed that each client had 3.3MB of Metadata 
> retained in its Cluster, with references to topics that were not being 
> produced or subscribed to. While the services had been running with 128MB of 
> heap prior to the upgrade, we had to increase max heap to 200MB to 
> accommodate all the extra data. 
> While this is not technically a memory leak, it does impose a significant 
> overhead on clients when connected to a large cluster.


