If you're using Mesos as your Spark cluster manager, you can use dynamic 
resource allocation: as your users run notebooks, Spark scales executors up 
and down per the thresholds you define, and when a user goes idle, Spark 
automatically releases the resources.

Please see the docs for more info:
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
http://spark.apache.org/docs/latest/running-on-mesos.html#dynamic-resource-allocation-with-mesos
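
For example, here's a minimal sketch of the relevant spark-defaults.conf 
settings (the executor bounds and timeout below are placeholders to tune 
for your cluster):

    # Enable dynamic allocation; requires the external shuffle service
    spark.dynamicAllocation.enabled              true
    spark.shuffle.service.enabled                true

    # Bounds within which executors scale up and down with load
    spark.dynamicAllocation.minExecutors         0
    spark.dynamicAllocation.maxExecutors         10

    # Release an executor once it has been idle this long
    spark.dynamicAllocation.executorIdleTimeout  60s

Note that on Mesos you'd also run the external shuffle service on each agent 
(the docs suggest $SPARK_HOME/sbin/start-mesos-shuffle-service.sh, launched 
e.g. via Marathon); see the second link above for details.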


Thanks,
Silvio

From: Dylan Meissner <dylan.meiss...@gettyimages.com>
Reply-To: "users@zeppelin.incubator.apache.org" <users@zeppelin.incubator.apache.org>
Date: Friday, March 4, 2016 at 11:52 AM
To: "users@zeppelin.incubator.apache.org" <users@zeppelin.incubator.apache.org>
Subject: Spark interpreter idle timeout

Greetings,

We run multiple per-user Zeppelin instances in a Mesos cluster. The Mesos 
Marathon framework hosts the Zeppelin servers, and running a note starts a 
Spark framework whose Spark context distributes the workload described in the 
notes. This works well for us.

However, when notebooks are left unattended, we'd like the Spark interpreter 
to shut down, freeing resources for other Mesos frameworks. Is there a way to 
set an "idle timeout" today, and if not, how do you imagine it could be 
accomplished in either Zeppelin or Spark?

Thanks,
Dylan Meissner
www.gettyimages.com
