From: Nishkam Ravi
Cc: Greg, "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: clarification for some spark on yarn configuration options

Hi Greg,

From browsing the code quickly I believe SPARK_DRIVER_MEMORY is not
actually picked ...

> ... fine.
>
> Greg
>
> From: Nishkam Ravi
> Date: Monday, September 22, 2014 3:30 PM
> To: Greg
> Cc: Andrew Or, "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: clarification for some spark on yarn configuration options
>
> ...re allocating for Spark. My understanding was that the overhead values
> should be quite a bit lower (and by default they are).
>
> Also, why must the executor be allocated less memory than the driver's
> memory overhead value?
>
> What am I misunderstanding here?
>
> Greg
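
(A note for readers of the archive: the question above is really about how
YARN sizes the container it requests for each executor or driver, namely the
configured heap plus the memory overhead. The sketch below illustrates that
arithmetic only; the 384 MB floor, the 7% fraction, and every name in it are
assumptions chosen for illustration, not a quote from Spark's code.)

    // Rough sketch of YARN container sizing as discussed in this thread.
    // The 384 MB floor and 7% fraction are assumptions for illustration,
    // not necessarily the defaults of any particular Spark release.
    object ContainerSizingSketch {
      val overheadMinMb    = 384   // assumed minimum overhead
      val overheadFraction = 0.07  // assumed fraction of the heap

      // Memory YARN is asked for: heap plus overhead (explicit or defaulted).
      def containerRequestMb(heapMb: Int, explicitOverheadMb: Option[Int]): Int = {
        val overhead = explicitOverheadMb.getOrElse(
          math.max((heapMb * overheadFraction).toInt, overheadMinMb))
        heapMb + overhead
      }

      def main(args: Array[String]): Unit = {
        println(containerRequestMb(4096, None))       // 4096 + 384  = 4480
        println(containerRequestMb(4096, Some(1024))) // 4096 + 1024 = 5120
      }
    }

(Whatever the exact defaults, the resulting request has to fit under YARN's
per-container limit, yarn.scheduler.maximum-allocation-mb, which is where
unexpectedly large overhead settings tend to bite.)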
>
> Date: ..., 2014 3:26 PM
> To: Andrew Or <and...@databricks.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: clarification for some spark on yarn configuration options
>
> I thought I had this all figured out, but I ...
>
> From: Andrew Or <and...@databricks.com>
> Date: Tuesday, September 9, 2014 5:49 PM
> To: Greg <greg.h...@rackspace.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: clarification for some spark on yarn configuration options
>
> Hi Greg,
>
> SPARK_EXECUTOR_INSTANCES is the total number of workers in the cluster. The
> equivalent "spark.executor.instances" is just another way to set the same
> thing in your spark-defaults.conf. Maybe this should be documented. :)
>
> "spark.yarn.executor.memoryOverhead" is just an additional margin a...