Hi
Set up a project under Eclipse using Maven with this dependency:

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.0.0</version>
    </dependency>
A simple example fails:

    import org.apache.spark.{SparkConf, SparkContext}

    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setMaster("local").setAppName("SimpleTest")
      val sc = new SparkContext(conf)
    }
Wow! What a quick reply!
Adding

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.4.0</version>
    </dependency>

solved the problem.
But now I get:

14/06/03 19:52:50 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could
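This error comes from Hadoop's Shell utility looking for winutils.exe when running on Windows. A commonly suggested workaround, sketched below under the assumption that winutils.exe has been downloaded and placed in C:\hadoop\bin (the path and app name are hypothetical), is to set the hadoop.home.dir system property before the SparkContext is created; it is equivalent to setting the HADOOP_HOME environment variable:

    import org.apache.spark.{SparkConf, SparkContext}

    object WinutilsWorkaround {
      def main(args: Array[String]): Unit = {
        // Hadoop's Shell class resolves winutils.exe from the
        // hadoop.home.dir system property (or HADOOP_HOME).
        // The path is an assumption; point it at the directory
        // whose bin\ subfolder contains winutils.exe.
        System.setProperty("hadoop.home.dir", "C:\\hadoop")

        val conf = new SparkConf().setMaster("local").setAppName("WinutilsCheck")
        val sc = new SparkContext(conf)
        sc.stop()
      }
    }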
Using

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.0.0</version>
    </dependency>

I can create a simple test and run it under Eclipse.
But when I try to deploy on the test server I have dependency problems.

1. Spark requires akka-remote_2.10 2.2.3-shaded-
I am using Maven from Eclipse; dependency:tree shows
[INFO] +- org.apache.spark:spark-core_2.10:jar:1.0.0:compile
[INFO] | +- net.java.dev.jets3t:jets3t:jar:0.7.1:runtime
[INFO] | +- org.apache.curator:curator-recipes:jar:2.4.0:compile
[INFO] | | +- org.apache.curator:curator-framework:jar:2.
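When the tree shows the same artifact arriving at different versions through different paths, Maven can be forced to a single version with an exclusion plus an explicit top-level dependency. A sketch, using curator-recipes from the tree above purely as an illustration (whether it is actually a conflicting artifact here is an assumption):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.0.0</version>
      <exclusions>
        <!-- Illustrative only: drop the transitive copy so the
             version declared elsewhere in this POM wins. -->
        <exclusion>
          <groupId>org.apache.curator</groupId>
          <artifactId>curator-recipes</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Running mvn dependency:tree -Dverbose also marks the duplicate versions that Maven omitted because of conflicts.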
Thanks for the hint.
I removed the signature info from the jar and the JVM is happy now.
But a problem remains: several copies of the same jar in different versions, which is not good.
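Stripping signature files by hand works once, but it can be automated when building a single deployable jar. Below is a sketch of a maven-shade-plugin configuration (the plugin version is an assumption; any reasonably recent one should do) whose filter removes META-INF signature entries, since a signed jar repackaged together with other classes fails JVM signature verification:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <!-- Remove signature files from all dependency jars so the
                   merged jar does not fail signature verification. -->
              <filter>
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
          </configuration>
        </execution>
      </executions>
    </plugin>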
Spark itself is very, very promising; I am very excited.
Thank you all
toivo
According to the "DataStax Brings Spark To Cassandra" press release:

"DataStax has partnered with Databricks, the company founded by the creators of Apache Spark, to build a supported, open source integration between the two platforms. The partners expect to have the integration ready by this summer."