Hi all, 

I just installed a Mesos 0.19 cluster, and I am failing to execute basic Spark SQL
operations on text files with Spark 1.0.1 from the spark-shell.


I have one Mesos master without ZooKeeper and 4 Mesos slaves.
All nodes are running JDK 1.7.51 and Scala 2.10.4.
The Spark package is uploaded to HDFS, and the user running the Mesos slave
has permission to access it.
I am running HDFS from the latest CDH5.
I tried both with the pre-built CDH5 Spark package available from
http://spark.apache.org/downloads.html and by packaging Spark myself with sbt
0.13.2, JDK 1.7.51 and Scala 2.10.4 as explained here:
http://mesosphere.io/learn/run-spark-on-mesos/


No matter what I try, when I execute the following code in the spark-shell:
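(The snippet itself is not shown above; for context, a minimal Spark 1.0-era Spark SQL session on an HDFS text file would look roughly like this, with a hypothetical path, table name and schema:)

```scala
// Hypothetical illustration only -- path, case class and table name are made up.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD  // implicit RDD -> SchemaRDD conversion in Spark 1.0

case class Record(key: String, value: Int)
val records = sc.textFile("hdfs://namenode:8020/data/sample.csv")
  .map(_.split(","))
  .map(r => Record(r(0), r(1).trim.toInt))

records.registerAsTable("records")  // Spark 1.0.x API
sqlContext.sql("SELECT key FROM records").collect()
```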



The job fails with the following error reported by the Mesos slave nodes:






Note that running a simple map+reduce job on the same HDFS files with the
same installation works fine:
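(Again for context, a plain RDD map+reduce job of the kind that does work, with a hypothetical path:)

```scala
// Hypothetical illustration of a working map+reduce job on the same file:
// count the total number of comma-separated fields across all lines.
val total = sc.textFile("hdfs://namenode:8020/data/sample.csv")
  .map(_.split(",").length)
  .reduce(_ + _)
println(total)
```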




The HDFS files are just plain CSV files:




spark-env.sh looks like this:
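(The actual file is not shown above; a typical spark-env.sh for a Spark 1.0 deployment on Mesos, with hypothetical host names and paths, would be:)

```shell
# Hypothetical values -- the actual spark-env.sh from this setup is not preserved.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://namenode:8020/spark/spark-1.0.1-bin-cdh5.tgz
export MASTER=mesos://master-host:5050
```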






Any help, comment or pointer would be greatly appreciated!

Thanks in advance


Svend







--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/ClassNotFoundException-line11-read-when-loading-an-HDFS-text-file-with-SparkQL-in-spark-shell-tp9954.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
