Hi Sophia, did you ever resolve this?

A common cause of YARN accepting a job but never granting it resources is
that the ResourceManager (RM) cannot communicate with the worker nodes (the
NodeManagers).
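One quick check, assuming a Hadoop 2.x yarn CLI on the RM host (just a
sketch, adjust to your setup):

$ yarn node -list

Every healthy NodeManager should appear in the output; an empty list (or
nodes stuck in a LOST or UNHEALTHY state when you add -all) means the RM has
no workers to hand containers to.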
This itself has many possible causes. Do you have a full stack trace from
the logs?
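If the application was at least accepted, the aggregated container logs are
usually the first place to look. Assuming log aggregation is enabled, and
with <applicationId> standing in for whatever ID the RM assigned:

$ yarn logs -applicationId <applicationId>

Failing that, the ResourceManager and NodeManager daemon logs under the
Hadoop log directory should show whether the nodes ever registered.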

Andrew


2014-06-13 0:46 GMT-07:00 Sophia <sln-1...@163.com>:

> With yarn-client mode, I submit a job from the client to YARN. My
> spark-env.sh file contains:
> export HADOOP_HOME=/usr/lib/hadoop
> export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
> SPARK_EXECUTOR_INSTANCES=4
> SPARK_EXECUTOR_CORES=1
> SPARK_EXECUTOR_MEMORY=1G
> SPARK_DRIVER_MEMORY=2G
> SPARK_YARN_APP_NAME="Spark 1.0.0"
>
> The command line and the result:
> $ export JAVA_HOME=/usr/java/jdk1.7.0_45/
> $ export PATH=$JAVA_HOME/bin:$PATH
> $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client
> ./bin/spark-submit: line 44: /usr/lib/spark/bin/spark-class: Success
> What can I do about this? YARN only accepts the job but never gives it
> memory. Why?
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
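P.S. Two things stand out in the quoted transcript. The bash message on line
44 of spark-submit looks like the script failed while handing off to
spark-class, and the quoted command passes no application jar at all, so
SparkSubmit has nothing to run. A complete invocation would look roughly like
the sketch below; the examples jar name is my guess from the stock 1.0.0
prebuilt distribution, so substitute whatever your build actually contains.
The SPARK_EXECUTOR_* values from spark-env.sh can equivalently be passed as
flags:

$ ./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn-client \
    --num-executors 4 \
    --executor-cores 1 \
    --executor-memory 1g \
    --driver-memory 2g \
    lib/spark-examples-1.0.0-hadoop2.2.0.jar 10

The trailing 10 is the number of slices SparkPi splits the computation
across.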
