[ https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17105336#comment-17105336 ]
Monika Hristova edited comment on FLINK-17493 at 5/12/20, 11:35 AM:
--------------------------------------------------------------------

[~xintongsong], we are migrating from 1.7.2 to 1.10.0, so we reproduce it with 1.10.0. Yes, we use Cassandra sinks as well. I will provide dumps soon.

was (Author: monika.h):

[~xintongsong], we are migrating from 1.7.2 to 1.10.0, so we reproduce it with 1.10.0. I will provide dumps soon.

> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>
>                 Key: FLINK-17493
>                 URL: https://issues.apache.org/jira/browse/FLINK-17493
>             Project: Flink
>          Issue Type: Bug
>      Components: Connectors / Cassandra
>    Affects Versions: 1.9.3, 1.10.0
>            Reporter: nobleyd
>            Priority: Major
>
> # The Cassandra sink uses direct memory.
> # Start a standalone cluster (1 machine) for testing.
> # After the cluster starts, check the Flink web UI and record the task manager's memory info, specifically the direct memory figures.
> # Start a job that reads from Kafka and writes to Cassandra using the Cassandra sink; you can see the direct memory count in the 'Outside JVM' section go up.
> # Stop the job; the direct memory count does not decrease (use 'jmap -histo:live pid' to force a GC in the task manager).
> # Repeat several times, and the direct memory count grows each time.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
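As a supplement to the reproduction steps above: besides the web UI and `jmap`, the direct memory counts can also be observed programmatically via the JDK's `BufferPoolMXBean`, which tracks NIO direct buffers such as those allocated by the Cassandra driver's Netty layer. This is a minimal sketch (not part of the reported job) that could be run inside, or attached to, the task manager JVM to record the before/after figures from steps 3-6:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        // The "direct" pool counts live direct ByteBuffers; a sink leak would
        // show up as a count/usage that never drops after the job is stopped.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
    }
}
```

Comparing this output before the job starts and after it is stopped (following a forced GC) would show whether the direct buffer count actually returns to its baseline.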