Hi Richard,

Thanks for the response.
I should have added that the specific case where this becomes a problem is when one of the executors for the application is lost or killed prematurely, and the application spawns a replacement executor without checking whether it already has an executor on the other node.

In your example, if one of the executors dies for some reason (memory exhaustion, or something else crashing it) and there are still free cores on the other nodes, an extra executor is spawned there, which can lead to further memory problems on the node it just landed on.

Hopefully that clears up what I mean :)

Mark.
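
P.S. To make the scenario concrete, here's a rough sketch of the kind of setup I have in mind. The master URL, app name, and the core/memory numbers are just placeholders, not taken from your setup:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative numbers: two workers, each offering 8 cores and 32g to Spark.
    // The app asks for 4-core / 16g executors, capped at 8 cores in total,
    // so it normally gets one executor on each worker.
    val conf = new SparkConf()
      .setMaster("spark://master:7077")   // placeholder standalone master URL
      .setAppName("executor-placement-example")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "16g")
      .set("spark.cores.max", "8")

    val sc = new SparkContext(conf)

    // If the executor on worker A is lost, the replacement only needs free
    // cores to be scheduled, so it can end up as a second executor on worker B
    // next to the one already running there, roughly doubling the executor
    // memory footprint on that node.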