Hi users,

I've hit an OOME (OutOfMemoryError) when using the Spark interpreter and would like to resolve this issue.
- Spark version: 1.4.1 + SPARK-11818 <http://issues.apache.org/jira/browse/SPARK-11818> applied
- Spark cluster: Mesos 0.22.1
- Zeppelin: commit 1ba6e2a <https://github.com/apache/incubator-zeppelin/commit/1ba6e2a5969e475bc926943885c120f793266147> + ZEPPELIN-507 <https://issues.apache.org/jira/browse/ZEPPELIN-507> & ZEPPELIN-509 <https://issues.apache.org/jira/browse/ZEPPELIN-509> applied
- Loaded one fat driver jar via %dep

I've run a paragraph which dumps an HBase table to HDFS several times, taking a memory histogram via "jmap -histo:live <pid>" after each run. Looking at the histograms, I can see that the interpreter's memory usage increases every time I run the paragraph.

There could be a memory leak in the Spark app itself, but nothing is clear yet, so I'd like to find other users who see the same behavior.

Is anyone seeing the same behavior, and if so, could you share how you resolved it?

Thanks,
Jungtaek Lim (HeartSaVioR)
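P.S. In case it helps anyone reproduce the comparison: a minimal sketch of diffing two successive "jmap -histo:live" snapshots to find classes whose instance counts keep growing. The snapshot contents below are made up for illustration; real jmap output has the columns num / #instances / #bytes / class name.

```shell
#!/usr/bin/env bash
# Take two snapshots some paragraph runs apart (PID is the interpreter pid):
#   jmap -histo:live "$PID" > histo1.txt
#   ... run the paragraph again ...
#   jmap -histo:live "$PID" > histo2.txt

# Illustrative snapshot contents (columns: num, #instances, #bytes, class name).
cat > histo1.txt <<'EOF'
   1:       1000     64000  [C
   2:        500     12000  java.lang.String
EOF
cat > histo2.txt <<'EOF'
   1:       4000    256000  [C
   2:        500     12000  java.lang.String
EOF

# Join the two snapshots on class name and print the instance-count delta,
# largest growth first. Classes near the top are leak candidates.
join -j 1 \
  <(awk '{print $4, $2}' histo1.txt | sort) \
  <(awk '{print $4, $2}' histo2.txt | sort) \
| awk '{print $1, $3 - $2}' | sort -k2 -rn
```

With the illustrative data above, "[C" (char arrays) shows the largest growth between the two snapshots.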