[ https://issues.apache.org/jira/browse/HIVE-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245042#comment-14245042 ]
Xuefu Zhang commented on HIVE-9017:
-----------------------------------

Okay, I think we have a difference over the term "executor". Spark can spawn one JVM on a worker node for an application. In that JVM, multiple Spark tasks (from the same application, of course) can run at the same time, each on its own thread. I thought an executor was equal to such a thread. If the definition of executor is the JVM, then my question would be whether it's possible for Spark to spawn multiple JVMs on a single worker node for a single application. My impression (from some presentation; I haven't seen it done either) is "NO", but again I could be completely wrong.

> Clean up temp files of RSC [Spark Branch]
> -----------------------------------------
>
>                 Key: HIVE-9017
>                 URL: https://issues.apache.org/jira/browse/HIVE-9017
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Rui Li
>
> Currently RSC will leave a lot of temp files in {{/tmp}}, including {{*_lock}}, {{*_cache}}, {{spark-submit.*.properties}}, etc.
> We should clean these files up, or they will exhaust disk space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
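The terminology being debated maps onto spark-submit's resource flags: --executor-cores sets how many task threads can run concurrently inside one executor JVM, while --num-executors (on YARN) sets how many executor JVMs the application requests. A minimal sketch; the class name, jar, and flag values are illustrative, not from this issue:

```shell
# Request 4 executor JVMs, each able to run up to 2 task threads at once.
# (--num-executors is a YARN-mode flag; class and jar are placeholders.)
spark-submit --master yarn \
  --num-executors 4 \
  --executor-cores 2 \
  --class org.example.MyApp myapp.jar
```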
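For the cleanup itself, one common JVM-side approach (a sketch under assumptions, not the actual RSC code; the helper name, prefix, and suffixes are illustrative) is to register each temp file for deletion at JVM exit:

```java
import java.io.File;
import java.io.IOException;

// Sketch of the cleanup HIVE-9017 asks for: register RSC-style temp files
// (e.g. *_lock, *_cache, spark-submit.*.properties) so the JVM removes them
// on normal shutdown. Names here are illustrative, not the RSC implementation.
public class TempFileCleanup {

    static File registerTempFile(String prefix, String suffix) throws IOException {
        File f = File.createTempFile(prefix, suffix); // created under java.io.tmpdir (/tmp by default)
        f.deleteOnExit();                             // queued for deletion when the JVM exits normally
        return f;
    }

    public static void main(String[] args) throws IOException {
        File lock = registerTempFile("hive-rsc-", "_lock");
        System.out.println(lock.exists());            // prints true while the JVM is still running
    }
}
```

Note that deleteOnExit only fires on a normal JVM exit; files left behind by a killed or crashed process would still need a separate sweep of {{/tmp}}.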