Hello:

I'm trying to launch an application on a YARN cluster with the following
command:

/opt/spark/bin/spark-submit --class com.abrandon.upm.GenerateKMeansData \
  --master yarn --deploy-mode client \
  /opt/spark/BenchMark-1.0-SNAPSHOT.jar kMeans 500000000 4 5 0.9 8

The arguments after the jar file are just the parameters of the
GenerateKMeansData application; spark-submit passes everything after the
jar path straight to the main class (see the sketch below).
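
Purely for illustration, here is roughly how those positional arguments show
up in the application; this is not the actual GenerateKMeansData code, and
the names are my own placeholders:

// Sketch only: the trailing spark-submit arguments arrive as main(args).
// Names and argument meanings are illustrative, not the real
// com.abrandon.upm.GenerateKMeansData implementation.
object GenerateKMeansDataSketch {
  def main(args: Array[String]): Unit = {
    // With the command above: args == Array("kMeans", "500000000", "4", "5", "0.9", "8")
    val mode      = args(0)          // e.g. "kMeans"
    val numPoints = args(1).toLong   // e.g. 500000000
    val rest      = args.drop(2)     // remaining positional parameters
    println(s"mode=$mode numPoints=$numPoints rest=${rest.mkString(",")}")
  }
}

When I submit this command, I get the following error: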

16/02/17 15:31:01 INFO Client: Application report for application_1455721308385_0005 (state: ACCEPTED)
16/02/17 15:31:02 INFO Client: Application report for application_1455721308385_0005 (state: FAILED)
16/02/17 15:31:02 INFO Client:
         client token: N/A
         diagnostics: Application application_1455721308385_0005 failed 2 times due to AM Container for appattempt_1455721308385_0005_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://stremi-17.reims.grid5000.fr:8088/proxy/application_1455721308385_0005/ Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/spark-5a98e9d4-6f90-446d-9bec-f0d30bffae32/__spark_conf__2242504518276040137.zip does not exist
java.io.FileNotFoundException: File file:/tmp/spark-5a98e9d4-6f90-446d-9bec-f0d30bffae32/__spark_conf__2242504518276040137.zip does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1455723059732
         final status: FAILED
         tracking URL: http://stremi-17.reims.grid5000.fr:8088/cluster/app/application_1455721308385_0005
         user: abrandon
16/02/17 15:31:02 ERROR SparkContext: Error initializing SparkContext.

I think the important part is the diagnostic message "File
file:/tmp/spark-5a98e9d4-6f90-446d-9bec-f0d30bffae32/__spark_conf__2242504518276040137.zip
does not exist". Does anybody know what that means?
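
The file: scheme in that path makes me suspect the __spark_conf__ archive is
being staged on the submitting host's local filesystem instead of on HDFS, so
the NodeManagers on other nodes cannot download it. That is only a guess. As a
minimal sketch (it assumes HADOOP_CONF_DIR on the submitting host points at
the cluster's core-site.xml), this is how one could check which default
filesystem the Hadoop client resolves:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

// Prints the default filesystem resolved by the Hadoop client configuration.
// If it reports file:/// instead of hdfs://<namenode>, the Spark YARN client
// would stage __spark_conf__*.zip on local disk, and remote NodeManagers
// would fail to localize it, which would match the trace above.
object CheckDefaultFs {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()  // picks up core-site.xml if it is on the classpath (e.g. via HADOOP_CONF_DIR)
    println("fs.defaultFS = " + conf.get("fs.defaultFS", "file:///"))
    println("resolved URI = " + FileSystem.get(conf).getUri)
  }
}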
