Hi,

When I submit a Java Spark job in cluster mode, I'm getting the following
exception.

*LOG TRACE:*

INFO yarn.ExecutorRunnable: Setting up executor with commands:
List({{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill %p',
-Xms1024m, -Xmx1024m, -Djava.io.tmpdir={{PWD}}/tmp,
'-Dspark.ui.port=0', '-Dspark.driver.port=48309',
-Dspark.yarn.app.container.log.dir=<LOG_DIR>,
org.apache.spark.executor.CoarseGrainedExecutorBackend,
--driver-url, akka.tcp://sparkDriver@ip:port/user/CoarseGrainedScheduler,
--executor-id, 2, --hostname, hostname, --cores, 1,
--app-id, application_1441965028669_9009,
--user-class-path, file:$PWD/__app__.jar,
--user-class-path, file:$PWD/json-20090211.jar,
1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr).

I have a cluster of 11 machines (9 with 64 GB of memory and 2 with 32 GB),
and my input data is 128 GB in size.

How can I solve this exception? Does it depend on the spark.driver.memory
and spark.executor.memory settings?
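For context, here is a minimal sketch of how those settings can be applied
in code; the app name and the 4g value are placeholders, not my actual
configuration:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class MemoryConfig {
        public static void main(String[] args) {
            // spark.executor.memory takes effect if set before the context
            // is created. In cluster mode, driver memory cannot be set here:
            // the driver JVM is already running by the time this executes,
            // so it must be passed to spark-submit (e.g. --driver-memory 4g).
            SparkConf conf = new SparkConf()
                    .setAppName("MyJob")                  // placeholder name
                    .set("spark.executor.memory", "4g")   // assumed value
                    .set("spark.executor.cores", "1");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... job logic ...
            sc.stop();
        }
    }

Alternatively, both values can be supplied on the command line via the
--driver-memory and --executor-memory flags of spark-submit.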


*Thanks*,
<https://in.linkedin.com/in/ramkumarcs31>
