Re: Spark.Executor.Cores question

2015-10-28 Thread mkhaitman
Unfortunately, setting the executor memory to prevent multiple executors from the same framework would inherently mean that we'd need to set just over half the available worker memory for each node. So if each node had 32GB of worker memory, then the application would need to set 17GB to absolutely guarantee that a second executor could never fit on the same node (2 x 17GB = 34GB > 32GB), leaving nearly half of each node's memory unused.
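[Editorial note: a minimal Scala sketch of the sizing arithmetic in this message; the 32GB figure comes from the message itself, everything else is illustrative.]

```scala
// To block a second executor on a worker, request strictly more than
// half of that worker's advertised memory.
val workerMemGb   = 32                          // from the message above
val executorMemGb = workerMemGb / 2 + 1         // 17 GB, just over half
assert(2 * executorMemGb > workerMemGb)         // a 2nd executor can't fit
val strandedGb = workerMemGb - executorMemGb    // 15 GB left unusable per node
```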

Re: Spark.Executor.Cores question

2015-10-27 Thread Richard Marscher
Ah I see, that's a bit more complicated =). If it's possible, would using `spark.executor.memory` to set the available worker memory used by executors help alleviate the problem of running on a node that already has an executor on it? I would assume that would have a constant worst-case overhead per node.
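[Editorial note: a minimal sketch of the workaround being floated here, assuming 32GB workers as in the reply above; the value is illustrative, not from the thread.]

```scala
import org.apache.spark.SparkConf

// Request just over half of each worker's memory so that a second
// executor for the same application can never be co-scheduled on
// that worker (2 x 17GB exceeds a 32GB worker's capacity).
val conf = new SparkConf()
  .set("spark.executor.memory", "17g")
```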

Re: Spark.Executor.Cores question

2015-10-27 Thread mkhaitman
Hi Richard, Thanks for the response. I should have added that the specific case where this becomes a problem is when one of the executors for that application is lost/killed prematurely, and the application attempts to spawn up a new executor without consideration as to whether an executor already exists for it on that worker.

Re: Spark.Executor.Cores question

2015-10-27 Thread Richard Marscher
Hi Mark, if you know your cluster's number of workers and cores per worker, you can set this up when you create a SparkContext and shouldn't need to tinker with the 'spark.executor.cores' setting. That setting is for running multiple executors per application per worker, which you are saying you don't want.
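[Editorial note: a minimal Scala sketch of the setup Richard describes; the master URL, worker count, and cores-per-worker values are assumptions, not from the thread.]

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical cluster shape -- substitute your own numbers.
val numWorkers        = 8
val coresPerWorker    = 4

val conf = new SparkConf()
  .setAppName("BalancedApp")
  .setMaster("spark://master:7077") // assumed standalone master URL
  // Cap the application's total cores; standalone mode's default
  // spreadOut scheduling then distributes them across all workers.
  .set("spark.cores.max", (numWorkers * coresPerWorker).toString)
// spark.executor.cores is deliberately left unset: in standalone mode
// that means at most one executor per worker for this application.

val sc = new SparkContext(conf)
```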

Spark.Executor.Cores question

2015-10-23 Thread mkhaitman
Regarding the 'spark.executor.cores' config option in a Standalone Spark environment, I'm curious about whether there's a way to enforce the following logic:

- Max cores per executor = 4
- Max executors PER application PER worker = 1

In order to force better balance across all workers, I want to prevent any single application from spawning more than one executor on the same worker.
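[Editorial note: the configuration this question is reaching for might look like the sketch below; all values are illustrative. Standalone mode (as of this 2015 thread) has a setting for the first constraint but no direct knob for the second, which is the crux of the thread.]

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.cores", "4") // constraint 1: max cores per executor
  .set("spark.cores.max", "32")     // e.g. 8 workers x 4 cores each
// Constraint 2 (max one executor per application per worker) has no
// direct standalone setting; the cores.max cap plus spreadOut scheduling
// only approximates it, as the replies above discuss.
```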