I think "SPARK_WORKER_INSTANCES" is deprecated.
This should work: "export SPARK_EXECUTOR_INSTANCES=2"
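In case it's useful, here is a rough sketch of what that could look like in conf/spark-env.sh. The value 2 is just an example, and note that SPARK_EXECUTOR_INSTANCES applies when running on YARN, where the equivalent spark-submit flag is --num-executors:

# conf/spark-env.sh (sketch, example value only)
export SPARK_EXECUTOR_INSTANCES=2

# or, equivalently, on the command line for a YARN job:
# spark-submit --num-executors 2 ...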
In Spark release 0.7.1, I added support for running multiple worker processes
on a single slave machine. I built it so that standalone mode could be
performance-tested with more than one worker per machine.
Set the following in conf/spark-env.sh and bounce your cluster:
export SPARK_WORKER_INSTANCES=3
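With several workers sharing one box, you may also want to cap the cores and memory each worker can hand to executors. A sketch of that, with placeholder values rather than recommendations:

# conf/spark-env.sh (sketch, placeholder values)
export SPARK_WORKER_INSTANCES=3
export SPARK_WORKER_CORES=4      # cores each worker process may use
export SPARK_WORKER_MEMORY=4g    # memory each worker may allocate to executors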