Hello,

I am able to submit a job to the Spark cluster from my Windows desktop, but
the executors fail to run.
When I check the Spark UI (which runs on Windows, since the driver is there),
it shows JAVA_HOME, CLASSPATH, and other environment variables with Windows
values.
I tried setting spark.executor.extraClassPath to the Unix classpath, and I
also called spark.getConf().setExecutorEnv("spark.executorEnv.JAVA_HOME",
"/usr/lib/jvm/java-6-openjdk-amd64") from within the program, but neither
helped.

The executors fail with the error below. Can you please let me know what I
am missing?

14/10/28 09:36:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1563)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:52)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:113)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:156)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:125)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:53)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:52)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        ... 4 more


Thanks,
  Shailesh