Re: Spark on YARN memory utilization

2014-12-09 Thread Denny Lee
Thanks Sandy!

Re: Spark on YARN memory utilization

2014-12-08 Thread Sandy Ryza
Another thing to be aware of is that YARN will round up containers to the nearest increment of yarn.scheduler.minimum-allocation-mb, which defaults to 1024.

-Sandy
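For illustration, here is a minimal sketch (in Scala) of the rounding Sandy describes; the helper name is made up, and the 1024 default comes from the message above rather than from any YARN API:

    // Round a container request up to the nearest multiple of
    // yarn.scheduler.minimum-allocation-mb (1024 by default).
    def roundUpToAllocation(requestMb: Int, minAllocationMb: Int = 1024): Int =
      math.ceil(requestMb.toDouble / minAllocationMb).toInt * minAllocationMb

    roundUpToAllocation(4480)  // => 5120: a 4480mb request occupies a 5gb container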

Re: Spark on YARN memory utilization

2014-12-06 Thread Denny Lee
Got it - thanks!

Re: Spark on YARN memory utilization

2014-12-06 Thread Arun Ahuja
Hi Denny,

This is due to the spark.yarn.memoryOverhead parameter. Depending on which version of Spark you are on, the default may differ, but it should be the larger of 1024mb per executor or 0.07 * executorMemory.

When you set executor memory, the YARN resource request is executorMemory + spark.yarn.memoryOverhead.
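To make the arithmetic concrete, here is a minimal sketch (in Scala) of the request size described above, using the defaults as stated in this thread; the helper name is hypothetical:

    // Total memory requested from YARN for one executor: executor memory
    // plus the overhead, taken as the larger of 1024mb or 7% of executor memory.
    def yarnExecutorRequestMb(executorMemoryMb: Int): Int = {
      val overheadMb = math.max(1024, (0.07 * executorMemoryMb).toInt)
      executorMemoryMb + overheadMb
    }

    yarnExecutorRequestMb(8192)  // => 9216: an 8gb executor asks YARN for 9216mb

Combined with Sandy's point earlier in the thread, this 9216mb request happens to land exactly on a multiple of 1024; a request such as 9000mb would still be rounded up to a 9216mb container.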