Yes, I think it is ONE worker, ONE executor, as an executor is nothing but a JVM instance spawned by the worker.
To run more executors, i.e. JVM instances, on the same physical cluster node, you need to run more than one worker on that node and then allocate only part of the system resources to each worker/executor (a spark-env.sh sketch follows the quoted message below).

-------- Original message --------
From: maxdml <max...@cs.duke.edu>
Date: 2015/06/10 19:56 (GMT+00:00)
To: user@spark.apache.org
Subject: Re: Determining number of executors within RDD

Actually this is somewhat confusing, for two reasons:

- First, the option 'spark.executor.instances', which seems to be handled only in the YARN case in the source code of SparkSubmit.scala, is also present in the conf/spark-env.sh file under the standalone section, which would suggest that it is also available in that mode.
- Second, a post from Andrew Or states that this property defines the number of workers in the cluster, not the number of executors on a given worker (http://apache-spark-user-list.1001560.n3.nabble.com/clarification-for-some-spark-on-yarn-configuration-options-td13692.html).

Could anyone clarify this? :-) Thanks.
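For illustration, here is a minimal conf/spark-env.sh sketch of the multi-worker-per-node setup described in the reply above. The variables are the standard standalone-mode settings from spark-env.sh.template; the specific values are assumptions for the example, not recommendations:

    # conf/spark-env.sh -- sketch only; values are illustrative assumptions
    SPARK_WORKER_INSTANCES=2   # worker (and hence executor) JVMs per node
    SPARK_WORKER_CORES=8       # cores each worker may offer to its executors
    SPARK_WORKER_MEMORY=16g    # memory each worker may offer to its executors

With two workers per node, each application can get up to two executor JVMs on that node, each limited to the cores and memory its worker was given.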