Nishkam Ravi
> Cc: Greg, "user@spark.apache.org" <user@spark.apache.org>
>
> Subject: Re: clarification for some spark on yarn configuration options
>
> Hi Greg,
>
> From browsing the code quickly I believe SPARK_DRIVER_MEMORY is not
> actually picked up …
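If SPARK_DRIVER_MEMORY really is not read on that code path, here is a minimal sketch of the more direct ways to set the driver memory, assuming a standard spark-submit invocation; the 2g value and the com.example.MyApp / myapp.jar names are placeholders, not taken from this thread:

# Option 1: pass the driver memory on the command line.
spark-submit --master yarn-cluster --driver-memory 2g \
  --class com.example.MyApp myapp.jar

# Option 2: set spark.driver.memory once in conf/spark-defaults.conf.
echo "spark.driver.memory 2g" >> "$SPARK_HOME/conf/spark-defaults.conf"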
… fine.
>>
>> Greg
>>
>> From: Nishkam Ravi
>> Date: Monday, September 22, 2014 3:30 PM
>> To: Greg
>> Cc: Andrew Or, "user@spark.apache.org" <user@spark.apache.org>
>>
>> Subject: Re: clarification for some spark on yarn configuration options
> … "user@spark.apache.org" <user@spark.apache.org>
>
> Subject: Re: clarification for some spark on yarn configuration options
>
> Greg, if you look carefully, the code is enforcing that the
> memoryOverhead be lower (and not higher) than spark.driver.memory.
>
>
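Purely to illustrate that constraint (this is not the actual check from the Spark code): a submission sketch in which the driver memory overhead stays below the driver memory. The sizes and the com.example.MyApp / myapp.jar names are placeholders.

# Illustrative values only: spark.yarn.driver.memoryOverhead (in MB) is kept
# below spark.driver.memory, matching the constraint described above.
spark-submit --master yarn-cluster \
  --driver-memory 2g \
  --conf spark.yarn.driver.memoryOverhead=512 \
  --class com.example.MyApp myapp.jar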
… allocated less memory than the driver's memory
overhead value?
What am I misunderstanding here?
Greg
From: Andrew Or <and...@databricks.com>
Date: Tuesday, September 9, 2014 5:49 PM
To: Greg <greg.h...@rackspace.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Date: …, 2014 3:26 PM
To: Andrew Or <and...@databricks.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: clarification for some spark on yarn configuration options
I thought I had this all figured out, but I …
Hi Greg,
SPARK_EXECUTOR_INSTANCES is the total number of workers in the cluster. The
equivalent "spark.executor.instances" is just another way to set the same
thing in your spark-defaults.conf. Maybe this should be documented. :)
"spark.yarn.executor.memoryOverhead" is just an additional margin a
Is SPARK_EXECUTOR_INSTANCES the total number of workers in the cluster or the
workers per slave node?
Is spark.executor.instances an actual config option? I found that in a commit,
but it's not in the docs.
What is the difference between spark.yarn.executor.memoryOverhead and
spark.executor.memory?