I have a Spark cluster in which some nodes are high-performance machines
and others have commodity (lower) specs.
When I configure worker memory and instances in spark-env.sh, the settings
apply to all the nodes.
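
For illustration, the relevant lines in my conf/spark-env.sh look something
like this (the values here are just examples):

    # conf/spark-env.sh -- currently identical on every node
    export SPARK_WORKER_MEMORY=16g     # memory each worker may use
    export SPARK_WORKER_INSTANCES=2    # worker processes launched per machine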
Can I set the SPARK_WORKER_MEMORY and SPARK_WORKER_INSTANCES properties on a
per-node (per-machine) basis?
I am using Spark 1.1.0.
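
In other words, what I would like is for each machine to keep its own
settings, something like this (hypothetical values):

    # spark-env.sh on a high-performance node
    export SPARK_WORKER_MEMORY=32g
    export SPARK_WORKER_INSTANCES=4

    # spark-env.sh on a commodity node
    export SPARK_WORKER_MEMORY=4g
    export SPARK_WORKER_INSTANCES=1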


