Thanks Sandy!
On Mon, Dec 8, 2014 at 23:15 Sandy Ryza wrote:
Another thing to be aware of is that YARN will round up containers to the
nearest increment of yarn.scheduler.minimum-allocation-mb, which defaults
to 1024.
-Sandy
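(The rounding Sandy describes can be sketched as below; this is a minimal illustration, not YARN's actual code, and it assumes the default yarn.scheduler.minimum-allocation-mb of 1024.)

```python
import math

def yarn_container_size(requested_mb, min_allocation_mb=1024):
    """Round a memory request up to the next multiple of
    yarn.scheduler.minimum-allocation-mb, as the YARN scheduler does."""
    return int(math.ceil(requested_mb / float(min_allocation_mb)) * min_allocation_mb)

# e.g. a 4480mb request lands in a 5120mb container:
# yarn_container_size(4480) -> 5120
```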
On Sat, Dec 6, 2014 at 3:48 PM, Denny Lee wrote:
Got it - thanks!
On Sat, Dec 6, 2014 at 14:56 Arun Ahuja wrote:
Hi Denny,
This is due to the spark.yarn.memoryOverhead parameter. Depending on which
version of Spark you are on, its default may differ, but it should be the
larger of 1024mb per executor or .07 * executorMemory.
When you set executor memory, the YARN resource request is executorMemory
plus that overhead.
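(The sizing Arun describes can be sketched as below; this is an illustration using the numbers from this thread — the larger of 1024mb or .07 * executorMemory — which may not match the defaults in every Spark version.)

```python
def yarn_request_mb(executor_memory_mb, overhead_fraction=0.07, min_overhead_mb=1024):
    """Total memory requested from YARN for one executor:
    executorMemory plus spark.yarn.memoryOverhead, where the overhead
    defaults to the larger of a fixed minimum or a fraction of executorMemory."""
    overhead = max(min_overhead_mb, int(overhead_fraction * executor_memory_mb))
    return executor_memory_mb + overhead

# e.g. a 4096mb executor asks YARN for 5120mb (4096 + 1024 overhead),
# which the scheduler may then round up further as described above.
```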