Yes, export worked.
Thank you
--
Did you `export` the environment variables? Also, are you running in client
mode or cluster mode? If it still doesn't work, you can try setting these
through the spark-submit command-line flags --num-executors, --executor-cores,
and --executor-memory.
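For example, something along these lines in spark-env.sh (note the export):

  export SPARK_EXECUTOR_CORES=1
  export SPARK_EXECUTOR_MEMORY=3G
  export SPARK_EXECUTOR_INSTANCES=5

or equivalently through spark-submit, assuming cluster mode here (the class
and jar names are just placeholders):

  spark-submit --master yarn-cluster \
    --num-executors 5 --executor-cores 1 --executor-memory 3G \
    --class com.example.MyApp myapp.jar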
2014-10-23 19:25 GMT-07:00 firemonk9 :
Hi,
I am facing the same problem. My spark-env.sh has the entries below, yet the
YARN container gets only 1G and YARN spawns only two workers.
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=3G
SPARK_EXECUTOR_INSTANCES=5
Please let me know if you were able to resolve this issue.
Thank you
Hi Sophia, did you ever resolve this?
A common cause of a job not being given resources is that the ResourceManager
(RM) cannot communicate with the workers.
This itself has many possible causes. Do you have a full stack trace from
the logs?
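If you have the application ID handy, one way to pull the aggregated YARN
logs (the ID below is a placeholder):

  yarn logs -applicationId application_1402651234567_0001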
Andrew
2014-06-13 0:46 GMT-07:00 Sophia :
> With the yarn-client mode