Found the solution. I was pointing to the wrong Hadoop conf directory. I feel
so stupid :P
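For anyone who hits the same thing: the fix is to make HADOOP_CONF_DIR point
at the directory that actually contains the cluster's core-site.xml and
yarn-site.xml. A minimal sketch, assuming the common /etc/hadoop/conf location
(adjust to wherever your Hadoop client config really lives):

    # conf/spark-env.sh
    # Point Spark at the real Hadoop/YARN client configuration.
    # /etc/hadoop/conf is an assumed path, not necessarily yours.
    export HADOOP_CONF_DIR=/etc/hadoop/conf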
The problem in my previous message seems to be with the timestamp of each
file. Before, I was copying the jar file to each slave node, so this time I
left the jar only on the master node. I reran the application, but now I get
the following INFO message:
16/02/18 11:22:58 INFO Client: Source and destination file systems are the same. Not copying ...
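That INFO line is harmless by itself: the YARN client skips uploading a
resource when it already lives on the destination filesystem. If the goal is
to avoid copying the jar to every slave by hand, one option is to stage it on
HDFS once and submit from there. A sketch under assumed paths and class name
(myapp.jar and com.example.Main are placeholders):

    # Put the application jar on HDFS once; executors fetch it themselves
    hdfs dfs -put target/myapp.jar /user/me/myapp.jar

    # Run on YARN in cluster mode, referencing the HDFS copy
    spark-submit --master yarn --deploy-mode cluster \
        --class com.example.Main \
        hdfs:///user/me/myapp.jar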
1. It happens to all the classes inside the jar package.
2. I didn't make any changes (the setup is sketched after this list):
   - I have three nodes: one master and two slaves in the conf/slaves file
   - In spark-env.sh I just set the HADOOP_CONF_DIR parameter
   - In spark-defaults.conf I didn't change anything
3. The cont
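For reference, a minimal sketch of the setup described in point 2, with
made-up hostnames and an assumed Hadoop config path:

    # conf/slaves -- one worker hostname per line
    slave1
    slave2

    # conf/spark-env.sh -- the only change from the template
    export HADOOP_CONF_DIR=/etc/hadoop/conf    # assumed path

    # conf/spark-defaults.conf -- left untouched (stock defaults)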
Hi,
Thanks for the question.
I do see this at the bottom:
16/02/17 15:31:02 ERROR SparkContext: Error initializing SparkContext.
Some questions to help me understand the setup better:
1) Does this happen to any other jobs?
2) Have there been any recent changes to the Spark setup?
3) Could you open the tracking URL?
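If the tracking URL is awkward to reach, the aggregated container logs usually
show the root cause behind "Error initializing SparkContext". A sketch with a
placeholder application id:

    # Fetch all container logs for the failed YARN application
    yarn logs -applicationId application_1455000000000_0001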