What the executors tab reports is not the full heap but the memory available for caching (storage): by default it is spark.storage.memoryFraction (60%) * spark.storage.safetyFraction (90%) of the executor's usable heap, which for a 1g executor works out to roughly 530 MB.
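A minimal sketch of the arithmetic, assuming the Spark 1.x defaults (spark.storage.memoryFraction = 0.6, spark.storage.safetyFraction = 0.9) and that the JVM reports about 981 MB of usable heap for -Xmx1g (the exact figure varies by JVM):

    object StorageMemoryEstimate {
      def main(args: Array[String]): Unit = {
        // Roughly what Runtime.getRuntime.maxMemory reports for -Xmx1g;
        // the JVM reserves part of the heap, so it is below the nominal 1024 MB
        val executorHeapBytes = 981L * 1024 * 1024
        val memoryFraction    = 0.6   // spark.storage.memoryFraction (Spark 1.x default)
        val safetyFraction    = 0.9   // spark.storage.safetyFraction (Spark 1.x default)
        val storageBytes = (executorHeapBytes * memoryFraction * safetyFraction).toLong
        println(f"Storage memory shown in the executors tab: ~${storageBytes / (1024.0 * 1024.0)}%.0f MB")
        // prints roughly 530 MB, matching what the UI reports for a 1g executor
      }
    }

If you want more of the heap available for caching, you can raise the fraction at submit time, e.g. --conf spark.storage.memoryFraction=0.8, at the cost of leaving less headroom for shuffle and task objects.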
On Fri, Apr 17, 2015 at 11:30 AM, podioss <grega...@hotmail.com> wrote:
> Hi,
> I am a bit confused by the executor-memory option. I am running
> applications with the Standalone cluster manager with 8 workers, each with 4 GB of memory
> and 2 cores, and when I submit my application with spark-submit I use
> --executor-memory 1g.
> In the web UI, in the completed applications table, I see that my application
> was correctly submitted with 1 GB of memory per node as expected, but when I check
> the executors tab of the application I see that every executor launched with
> 530 MB, which is about half the memory in the configuration.
> I would really appreciate an explanation if anyone has one.
>
> Thanks
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Executor-memory-in-web-UI-tp22538.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.