Client mode would not support HDFS jar extraction.
I tried this:
sudo -u hdfs spark-submit --class org.apache.spark.examples.SparkPi \
  --deploy-mode cluster --master yarn \
  hdfs:///user/spark/spark-examples-1.2.0-cdh5.3.2-hadoop2.5.0-cdh5.3.2.jar 10
And it worked.
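For the client-mode case, a possible workaround (a sketch only — the HDFS path, jar name, and main class below are placeholders, not from this thread) is to copy the jar out of HDFS first so spark-submit sees a local file:

```shell
# Placeholder paths/classes: pull the application jar out of HDFS,
# then submit it as a local file in client mode.
hdfs dfs -get /user/spark/myapp.jar /tmp/myapp.jar
spark-submit --class com.example.MyApp \
  --master yarn --deploy-mode client \
  /tmp/myapp.jar 10
```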
Made it work by using yarn-cluster as master instead of local.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-submit-not-working-when-application-jar-is-in-hdfs-tp21840p22281.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
Looking at SparkSubmit#addJarToClasspath():
uri.getScheme match {
  case "file" | "local" =>
    ...
  case _ =>
    printWarning(s"Skip remote jar $uri.")
}
It seems the hdfs scheme is not recognized: only file and local URIs are added to the classpath, and anything else is skipped as a remote jar.
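A minimal standalone sketch of that scheme check (the class name and classify helper are hypothetical, but the branching mirrors the match above): file, local, or scheme-less URIs would be treated as local jars, while an hdfs:// URI falls through to the skip branch.

```java
import java.net.URI;

// Hypothetical demo of the scheme check in addJarToClasspath():
// only "file", "local", or no scheme at all counts as a local jar.
public class SchemeCheck {
    static String classify(String jar) {
        String scheme = URI.create(jar).getScheme();
        if (scheme == null || scheme.equals("file") || scheme.equals("local")) {
            return "added to classpath";
        }
        return "skipped as remote jar";
    }

    public static void main(String[] args) {
        System.out.println(classify("file:///tmp/app.jar"));        // added to classpath
        System.out.println(classify("hdfs:///user/spark/app.jar")); // skipped as remote jar
    }
}
```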
FYI
On Thu, Feb 26, 2015 at 6:09 PM, dilm wrote:
> I'm trying to run a spark applicat
Hi, did you resolve this issue or just work around it by keeping your
application jar local? I'm running into the same issue with 1.3.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-submit-not-working-when-application-jar-is-in-hdfs-tp21840p22272.html
Sent from the Apache Spark User List mailing list archive at Nabble.com