On Tue, Jul 5, 2016 at 4:18 PM, Jakub Stransky <stransky...@gmail.com> wrote:

> 1) Is it possible to configure multiple executors per worker machine?

Yes.

> Do I understand it correctly that SPARK_WORKER_MEMORY and
> SPARK_WORKER_CORES essentially describe the resources available to Spark
> on that machine? And that the number of executors actually run depends on
> the spark.executor.memory setting, i.e. the number of executors is
> SPARK_WORKER_MEMORY / spark.executor.memory?

Sort of. Those variables describe what each *worker* offers, and
SPARK_WORKER_INSTANCES controls how many worker processes run per machine.
For example, the following conf/spark-env.sh starts 2 workers on a machine,
each offering 2 cores and 2g:

SPARK_WORKER_CORES=2      # cores each worker offers to executors
SPARK_WORKER_INSTANCES=2  # worker processes per machine
SPARK_WORKER_MEMORY=2g    # memory each worker offers to executors
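
As a back-of-envelope sketch of the formula in the question (the sizes
are hypothetical, matching the 2g worker above with 1g executors at
submit time):

```shell
# How many executors fit in one worker, per
# SPARK_WORKER_MEMORY / spark.executor.memory.
SPARK_WORKER_MEMORY_MB=2048   # 2g per worker, as in the conf above
EXECUTOR_MEMORY_MB=1024       # spark.executor.memory=1g
echo $(( SPARK_WORKER_MEMORY_MB / EXECUTOR_MEMORY_MB ))  # prints 2
```

So with 2 workers per machine, up to 4 such executors could land on
one machine, cores permitting.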

> 2) How do I limit resources at application submission time?
> I can change executor-memory when submitting an application, but that
> specifies just the size of each executor, right? It only indirectly
> changes the number of executors run on a worker machine. Is there a way
> to limit the number of executors per application, for example when more
> applications are running on the cluster?

Standalone has no setting for an exact executor count per application.
The closest control is capping the total cores an application may use,
via spark.cores.max (or --total-executor-cores on spark-submit).
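
For what it's worth, Standalone does let you cap the *total cores* one
application takes, which in practice bounds its executor count. A sketch
with --total-executor-cores; the master URL, class name, and jar are
placeholders:

```shell
# Cap this application at 4 cores cluster-wide so that other
# applications keep some resources for themselves.
spark-submit \
  --master spark://master:7077 \
  --executor-memory 1g \
  --total-executor-cores 4 \
  --class com.example.MyApp \
  myapp.jar
```

The same cap can be set per-application with --conf spark.cores.max=4,
or cluster-wide as a default in spark-defaults.conf.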

Jacek
