Hi,

I’m running Spark jobs on Mesos. When a job finishes, the *SparkContext* is
shut down manually via sc.stop(). The Mesos log then shows:

I0809 15:48:34.132014 11020 sched.cpp:1589] Asked to stop the driver
I0809 15:48:34.132181 11277 sched.cpp:831] Stopping framework
'20160808-170425-2365980426-5050-4372-0034'

However, the driver process never actually exits. This is critical for me,
because I’d like to use SparkLauncher to submit such jobs: if each job’s
process doesn’t terminate, processes will pile up and exhaust memory.
Please help. :-|

—
BR,
Todd Leo
