Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Sean Owen
The problem is, you're left with two competing options then. You can go through the process of deprecating the absolute one and removing it eventually. You take away the ability to set this value directly, though, meaning you'd have to set absolute values by depending on a % of what you set your app mem
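
To make the trade-off concrete, here is a minimal Scala sketch of the two styles under discussion. The absolute key spark.yarn.executor.memoryOverhead is the one Spark actually exposes; the fraction-based key and the 384 MB floor are shown only as assumptions to illustrate the alternative, not as an adopted API.

    // Sketch only: the fraction-based key is hypothetical (it was not adopted).
    // Style 1: user sets an absolute overhead in MB.
    //   spark.yarn.executor.memoryOverhead = 1024
    // Style 2: user sets a fraction of executor memory (hypothetical key).
    //   spark.yarn.executor.memoryOverheadFraction = 0.10
    def resolveOverheadMB(executorMemoryMB: Int,
                          absoluteMB: Option[Int],
                          fraction: Double): Int =
      // An explicit absolute value wins; otherwise derive from the fraction,
      // keeping a small floor so tiny executors still get enough headroom.
      absoluteMB.getOrElse(math.max((fraction * executorMemoryMB).toInt, 384))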

Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Ted Yu
bq. that 0.1 is "always" enough? The answer is: it depends (on use cases). The value of 0.1 has been validated by several users. I think it is a reasonable default. Cheers On Mon, Mar 2, 2015 at 8:36 AM, Ryan Williams wrote: > For reference, the initial version of #3525 >
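
For a sense of what the 0.1 factor means in practice, here is a small worked example, assuming the default overhead is computed as max(384 MB, 0.10 x executor memory) as discussed in this thread:

    // Assumed default computation: max(384 MB, 0.10 * executor memory).
    val factor  = 0.10
    val floorMB = 384
    def defaultOverheadMB(executorMemoryMB: Int): Int =
      math.max((factor * executorMemoryMB).toInt, floorMB)

    defaultOverheadMB(2048)   // 384 MB: the floor still applies to small executors
    defaultOverheadMB(10240)  // 1024 MB: 10% of a 10 GB executor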

Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Ryan Williams
For reference, the initial version of #3525 (still open) made this fraction a configurable value, but consensus went against that being desirable so I removed it and marked SPARK-4665 as "won't fix". My

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Corey Nolet
Thanks for taking this on Ted! On Sat, Feb 28, 2015 at 4:17 PM, Ted Yu wrote: > I have created SPARK-6085 with pull request: > https://github.com/apache/spark/pull/4836 > > Cheers > > On Sat, Feb 28, 2015 at 12:08 PM, Corey Nolet wrote: > >> +1 to a better default as well. >> >> We were working

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Ted Yu
I have created SPARK-6085 with pull request: https://github.com/apache/spark/pull/4836 Cheers On Sat, Feb 28, 2015 at 12:08 PM, Corey Nolet wrote: > +1 to a better default as well. > > We were working fine until we ran against a real dataset which was much > larger than the test dataset we were

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Corey Nolet
+1 to a better default as well. We were working fine until we ran against a real dataset which was much larger than the test dataset we were using locally. It took me a couple of days and digging through many logs to figure out that this value was what was causing the problem. On Sat, Feb 28, 2015 at 11:
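
For anyone else bitten by this, the usual workaround is to raise the overhead explicitly rather than rely on the default; a hedged Scala example follows (the 2048 MB figure is illustrative, not a recommendation from this thread):

    import org.apache.spark.SparkConf

    // Raise the YARN overhead explicitly so the container is sized to cover
    // off-heap usage; the value is in megabytes.
    val conf = new SparkConf()
      .setAppName("overhead-example")
      .set("spark.executor.memory", "8g")
      .set("spark.yarn.executor.memoryOverhead", "2048")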

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Ted Yu
Having a good out-of-box experience is desirable. +1 on increasing the default. On Sat, Feb 28, 2015 at 8:27 AM, Sean Owen wrote: > There was a recent discussion about whether to increase or indeed make > configurable this kind of default fraction. I believe the suggestion there too was that 9

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Sean Owen
There was a recent discussion about whether to increase or indeed make configurable this kind of default fraction. I believe the suggestion there too was that 9-10% is a safer default. Advanced users can lower the resulting overhead value; it may still have to be increased in some cases, but a fat