Thanks Akhil so much!

It turned out that HADOOP_HOME was not set.
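In case it helps others hitting the same NullPointerException on Windows, a
minimal sketch of the fix (D:\hadoop is just a placeholder here; point it at a
directory whose bin folder contains winutils.exe), run before starting the
Spark worker:

  :: make Hadoop's Shell able to locate bin\winutils.exe
  set HADOOP_HOME=D:\hadoop
  set PATH=%HADOOP_HOME%\bin;%PATH%

After setting it I restarted the worker and resubmitted the job.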

Dong Lei
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, June 8, 2015 3:12 PM
To: Dong Lei
Cc: user@spark.apache.org
Subject: Re: Driver crash at the end with InvocationTargetException when 
running SparkPi

Can you look in your worker logs for a more detailed stack trace? If it's about 
winutils.exe, you can look at these links to get it resolved; a rough sketch of 
the workaround follows the links.

- http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7
- https://issues.apache.org/jira/browse/SPARK-2356
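
In short, the workaround those threads describe is to tell Hadoop's Shell where 
to find winutils.exe. As a rough sketch (untested on your setup; D:\hadoop is a 
placeholder directory that must contain bin\winutils.exe), either set the 
environment variable before starting the worker:

  set HADOOP_HOME=D:\hadoop

or pass the equivalent JVM property when submitting:

  ./bin/spark-submit --driver-java-options "-Dhadoop.home.dir=D:\hadoop" ...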

Thanks
Best Regards

On Mon, Jun 8, 2015 at 9:01 AM, Dong Lei <dong...@microsoft.com> wrote:
Hi spark users:

After I submitted a SparkPi job to spark, the driver crashed at the end of the 
job with the following log:

WARN EventLoggingListener: Event log dir file:/d:/data/SparkWorker/work/driver-20150607200517-0002/logs/event does not exists, will newly create one.
Exception in thread "main" java.lang.reflect.InvocationTargetException
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:606)
                at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:59)
                at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.lang.NullPointerException
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
                at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
                at org.apache.hadoop.util.Shell.run(Shell.java:418)
                at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
                at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
                at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
                at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
                at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
                at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:135)
                at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
                at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
                at org.apache.spark.examples.SparkPi.main(SparkPi.scala)

From the log, I can see that the driver added jars from HDFS, connected to the 
master, and scheduled executors, and all the executors were running. Then this 
error occurred.

The command I used to submit the job (I'm running Spark 1.3.1 in standalone mode 
on Windows):
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  hdfs://localhost:443/spark-examples-1.3.1-hadoop2.4.0.jar \
  1000


Any ideas about the error?
I’ve found a similar error in JIRA 
(https://issues.apache.org/jira/browse/SPARK-1407), but it only occurred in 
FileLogger when running on YARN with the event log set to HDFS. In my case, I am 
running in standalone mode with the event log set to local, and my error comes 
from org.apache.hadoop.util.Shell.runCommand.


Best Regards
Dong Lei
