Hi, for a 30 GB executor, how much off-heap memory should I give, along with
YARN memory overhead? Is that okay?
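(For reference, a minimal sketch in Scala of the settings this thread is about;
the sizes below are placeholders, not recommendations, and assume Spark 1.6 on
YARN. Note the camelCase spelling of the off-heap keys, which comes up further
down the thread.)

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.memory", "30g")                // on-heap executor size from the question
      .set("spark.memory.offHeap.enabled", "true")        // off-heap use must be enabled explicitly
      .set("spark.memory.offHeap.size", "4g")             // placeholder off-heap budget
      .set("spark.yarn.executor.memoryOverhead", "5120")  // placeholder, in MB; off-heap allocations
                                                          // count against the YARN container limit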
On Thu, Jan 7, 2016 at 4:24 AM, Ted Yu wrote:
Turns out that I should have specified -i to my former grep command :-)
Thanks Marcelo
But does this mean that specifying a custom value for the parameter
spark.memory.offheap.size would not take effect?
Cheers
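(A note on the question above: Spark matches configuration keys exactly, so an
all-lowercase spark.memory.offheap.size would be silently ignored; the key the
1.6 code reads is the camelCase spark.memory.offHeap.size. A minimal check,
assuming Spark 1.6's SparkConf API:)

    import org.apache.spark.SparkConf

    val conf = new SparkConf().set("spark.memory.offHeap.size", "5g")
    // getSizeAsBytes parses size strings such as "5g" into a byte count
    val bytes = conf.getSizeAsBytes("spark.memory.offHeap.size")  // 5368709120L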
On Wed, Jan 6, 2016 at 2:47 PM, Marcelo Vanzin wrote:
> Try "git grep -i spark.memory.o
Try "git grep -i spark.memory.offheap.size"...
On Wed, Jan 6, 2016 at 2:45 PM, Ted Yu wrote:
Maybe I looked in the wrong files - I searched *.scala and *.java files (in
latest Spark 1.6.0 RC) for '.offheap.' but didn't find the config.
Can someone enlighten me?
Thanks
On Wed, Jan 6, 2016 at 2:35 PM, Jakob Odersky wrote:
Check the configuration guide for a description on units (
http://spark.apache.org/docs/latest/configuration.html#spark-properties).
In your case, 5GB would be specified as 5g.
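(A minimal sketch of that in Scala, assuming the camelCase key name used in the
Spark 1.6 source:)

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.memory.offHeap.enabled", "true")  // off-heap is disabled by default
      .set("spark.memory.offHeap.size", "5g")       // "5g" = 5 GiB; a bare "5000" would be read as bytes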
On 6 January 2016 at 10:29, unk1102 wrote:
Hi, as part of the Spark 1.6 release, what should be the ideal value or unit
for spark.memory.offheap.size? I have set it to 5000, and I assume that means
5 GB. Is that correct? Please guide.