I'm trying to compare the performance of Spark running on Mesos vs. YARN.
However, I'm having trouble configuring the Spark workload to run in a
similar way on both.

When running Spark on YARN, you can specify the number of executors per
node. So if I have a node with 4 CPUs, I can specify 6 executors on that
node. When running Spark on Mesos, there doesn't seem to be an equivalent
way to specify this. In Mesos, you can somewhat force it by declaring 6 CPU
resources when starting the slave daemon (e.g., via the slave's --resources
flag), but that is a static configuration of the Mesos cluster rather than
something that can be configured from within the Spark framework.
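
For reference, here is a minimal sketch of the YARN-side control I mean,
using the spark.executor.instances property (the setting behind YARN's
--num-executors flag); the app name and the resource sizes are made-up
examples:

    // Ask YARN for a fixed number of executors of a given size up front.
    // The master URL would be supplied by spark-submit, so it is omitted.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("yarn-executor-count-example")
      .set("spark.executor.instances", "6")  // same knob as --num-executors 6
      .set("spark.executor.cores", "1")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)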

So here is my question:

For Spark on Mesos, am I correct that there is no way to control the number
of executors per node (assuming an idle cluster)? In Mesos coarse-grained
mode, you can cap the total cores with spark.cores.max, but that is still
not equivalent to specifying the number of executors per node the way you
can when Spark runs on YARN.
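
To make the coarse-grained setting concrete, here is a minimal sketch of
what I mean (the ZooKeeper-based master URL is a placeholder). As I
understand it, spark.cores.max caps the total cores the framework acquires
across the whole cluster, not how executors are spread over nodes:

    // Coarse-grained Mesos mode with a cluster-wide core cap.
    // spark.cores.max limits the total cores Spark takes across all nodes;
    // it says nothing about how many executors land on any one node.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("mesos-coarse-example")
      .setMaster("mesos://zk://zkhost:2181/mesos")  // placeholder URL
      .set("spark.mesos.coarse", "true")
      .set("spark.cores.max", "24")  // cluster-wide cap, not per-node
    val sc = new SparkContext(conf)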

If I am correct, then it seems Spark might be at a disadvantage running on
Mesos compared to YARN, since it lacks the fine-grained control over
executor placement that YARN provides.

Thanks,
Mike


