Re: clarification for some spark on yarn configuration options

2014-09-23 Thread Andrew Or
Hi Greg, From browsing the code quickly I believe SPARK_DRIVER_MEMORY is not actually picked …
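For readers following the thread, a minimal sketch of the usual alternatives when the environment variable is not honored, assuming Spark 1.x; the 2g value and the application names are placeholders, not from the thread:

    # set driver memory at submit time:
    spark-submit --driver-memory 2g --class MyApp my-app.jar

    # or persistently, in conf/spark-defaults.conf:
    spark.driver.memory  2g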

Re: clarification for some spark on yarn configuration options

2014-09-23 Thread Greg Hill
Hi Greg, From browsing the code quickly I believe SPARK_DRIVER_MEMORY is not …

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Andrew Or
… fine. Greg …

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Nishkam Ravi
…re allocating for Spark. My understanding was that the overhead values should be quite a bit lower (and by default they are). Also, why must the executor be allocated less memory than the driver's memory overhead value? What am I misunderstanding here?
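For context, each YARN executor container is requested at roughly spark.executor.memory plus spark.yarn.executor.memoryOverhead, so the overhead trades off directly against heap within the per-container limit. A hypothetical sizing sketch (the 8192 MB limit is assumed for illustration, not taken from the thread):

    # assuming the cluster caps containers at yarn.scheduler.maximum-allocation-mb = 8192
    spark.executor.memory               7g
    spark.yarn.executor.memoryOverhead  1024   # MB; container request is roughly 7 GB + 1 GB = 8 GB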

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Greg Hill
…allocated less memory than the driver's memory overhead value? What am I misunderstanding here? Greg

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Greg Hill
I thought I had this all figured out, but I'…

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Nishkam Ravi
…overhead value? What am I misunderstanding here? Greg … Hi Greg, …

Re: clarification for some spark on yarn configuration options

2014-09-22 Thread Greg Hill
…the driver's memory overhead value? What am I misunderstanding here? Greg

Re: clarification for some spark on yarn configuration options

2014-09-09 Thread Andrew Or
Hi Greg, SPARK_EXECUTOR_INSTANCES is the total number of workers in the cluster. The equivalent "spark.executor.instances" is just another way to set the same thing in your spark-defaults.conf. Maybe this should be documented. :) "spark.yarn.executor.memoryOverhead" is just an additional margin a…
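A minimal spark-defaults.conf sketch of the two settings Andrew describes; the numbers are illustrative only, not recommendations from the thread:

    spark.executor.instances            4     # same effect as SPARK_EXECUTOR_INSTANCES
    spark.yarn.executor.memoryOverhead  512   # MB, requested from YARN on top of spark.executor.memory

Setting these in spark-defaults.conf keeps the values out of per-job submit commands, which is why the thread treats the property and the environment variable as interchangeable.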