I believe this corresponds to the 0.6 fraction of the heap that is
allocated for caching partitions; see spark.storage.memoryFraction on
http://spark.apache.org/docs/latest/configuration.html. 0.6 of 4GB is
about 2.4GB, and since the JVM reports a usable heap somewhat below the
configured 4GB, that works out to roughly the 2.3GB you see in the UI.
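
To make the arithmetic concrete, here is a rough sketch in Scala (just an
illustration of the back-of-the-envelope math, not the exact calculation
Spark does internally):

    import org.apache.spark.SparkConf

    // Back-of-the-envelope version of where the ~2.3GB figure comes from.
    val conf = new SparkConf()
      .set("spark.executor.memory", "4g")
      // 0.6 is the default; raise or lower it to change how much of the
      // heap is set aside for cached partitions.
      .set("spark.storage.memoryFraction", "0.6")

    // Rough arithmetic only: the real number uses the JVM's reported max
    // heap, which is a bit below the configured 4g.
    val heapBytes  = 4L * 1024 * 1024 * 1024
    val fraction   = conf.get("spark.storage.memoryFraction").toDouble
    val cacheBytes = (heapBytes * fraction).toLong
    println(f"~${cacheBytes / math.pow(1024, 3)}%.1f GB available for caching")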

The note there is important: you probably don't want this cache fraction
to exceed the JVM old generation size.
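
If you do raise spark.storage.memoryFraction, one way to keep the old
generation at least that large is to pass a NewRatio through the executor
JVM options. A minimal sketch, assuming you set it via SparkConf:

    import org.apache.spark.SparkConf

    // Sketch only: bump the cache fraction and size the old generation to
    // match. -XX:NewRatio=3 asks HotSpot to give the old generation 3/4 of
    // the heap, which stays above a 0.7 storage fraction.
    val conf = new SparkConf()
      .set("spark.storage.memoryFraction", "0.7")
      .set("spark.executor.extraJavaOptions", "-XX:NewRatio=3")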

On Tue, Dec 16, 2014 at 12:53 AM, Pala M Muthaia
<mchett...@rocketfuelinc.com> wrote:
> Hi,
>
> Running Spark 1.0.1 on YARN 2.5
>
> When I specify --executor-memory 4g, the Spark UI shows each executor as
> having only 2.3 GB, and similarly for 8g, only 4.6 GB.
>
> I am guessing that the executor memory corresponds to the container memory,
> and that the task JVM gets only a percentage of the container total memory.
> Is there a YARN or Spark parameter to tune this so that my task JVM actually
> gets, say, 6GB out of the 8GB?
>
>
> Thanks.
>
>
