Spark's memory settings have me confused.

My submit command is as follows:

spark-1.0.2-bin-2.4.1/bin/spark-submit --class SimpleApp \
--master yarn \
--deploy-mode cluster \
--queue sls_queue_1 \
--num-executors 3 \
--driver-memory 6g \
--executor-memory 10g \
--executor-cores 5 \
target/scala-2.10/simple-project_2.10-1.0.jar \
/user/www/abc/output/2014-08-*/*
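
If I read the docs right, on YARN each executor container is sized at executor-memory plus a small overhead (384 MB by default in this version, I believe), so my expectation per container would be roughly:

  10240 MB (executor-memory) + 384 MB (overhead) = 10624 MB per container

(That arithmetic is just my assumption from the docs, not something I have verified on the cluster.)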


I set --executor-memory to 10g, but when I look at the Java process on the NodeManager, its heap is -Xmx3072m. Why is it 3072m instead of 10g?

jdk1.7.0_67//bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx3072m
-Djava.io.tmpdir=/data/hadoop/nodemanager/usercache/www/appcache/application_1408182086233_0013/container_1408182086233_0013_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/data/hadoop/logs/nodemanager/application_1408182086233_0013/container_1408182086233_0013_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.1.13.11 35389
attempt_1408182086233_0013_m_000002_0 4
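
A quick way to double-check which JVM is actually the Spark executor (a rough check, assuming jps from the JDK is on the PATH of the NodeManager host):

  # Spark executors on YARN should show up as
  # org.apache.spark.executor.CoarseGrainedExecutorBackend;
  # org.apache.hadoop.mapred.YarnChild is a MapReduce task JVM.
  jps -lm | grep -E 'CoarseGrainedExecutorBackend|YarnChild'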


Thanks

cente...@gmail.com|齐忠
