I am having the same problems. Did you find a fix?
--
+1, I agree we need this too. Looks like there is already an issue for it
here:
https://spark-project.atlassian.net/browse/SPARK-750
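In the meantime, what has been workable for us is spinning up a local-mode
context inside the test itself, so no cluster (or test jar) is needed. A
minimal sketch, assuming ScalaTest; the suite name and the tiny word dataset
are just made up for illustration:

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite}

// Runs entirely in-process against a local[2] master.
class WordCountSuite extends FunSuite with BeforeAndAfterAll {

  private var sc: SparkContext = _

  override def beforeAll(): Unit = {
    sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("unit-test"))
  }

  override def afterAll(): Unit = {
    sc.stop()
  }

  test("word count over a tiny in-memory dataset") {
    val counts = sc.parallelize(Seq("a", "b", "a"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collectAsMap()
    assert(counts("a") == 2)
  }
}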
--
The code for this example is very simple:

object SparkMain extends App with Serializable {
  val conf = new SparkConf(false)
    //.setAppName("cc-test")
    //.setMaster("spark://hadoop-001:7077")
    //.setSparkHome("/tmp")
    .set("spark.driver.host", "192.168.23.108")
    .set("spark.cores.
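In case it helps to see it in one piece, this is roughly the shape of setup I
mean; the master URL, driver IP and assembly jar path are placeholders for my
environment:

import org.apache.spark.{SparkConf, SparkContext}

object SparkMain extends App {
  // The driver runs on the dev machine, so the workers need a routable
  // address to reach it, and the executors need our application classes.
  val conf = new SparkConf(false)
    .setAppName("cc-test")
    .setMaster("spark://hadoop-001:7077")                    // standalone master
    .set("spark.driver.host", "192.168.23.108")              // dev machine's IP
    .setJars(Seq("target/scala-2.10/cc-test-assembly.jar"))  // placeholder path

  val sc = new SparkContext(conf)
  println(sc.parallelize(1 to 100).reduce(_ + _))
  sc.stop()
}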
I'm trying to run a local driver (on a development machine) and have this
driver communicate with the Spark master and workers; however, I'm having a
few problems getting the driver to connect and run a simple job from within
an IDE.
It all looks like it works, but when I try to do something simple
Thanks AL!
That's what I thought. I've set up Nexus to maintain the Spark libs and
download them when needed.
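For reference, the build side of that is just a resolver entry pointing at the
proxy; a rough sketch of the sbt config (the Nexus host and group path below
are placeholders, and the Spark version is whatever the cluster runs):

// build.sbt -- resolve Spark artifacts through the internal Nexus proxy
resolvers += "internal-nexus" at
  "http://nexus.example.local:8081/nexus/content/groups/public/"

libraryDependencies +=
  "org.apache.spark" %% "spark-core" % "1.0.1" % "provided"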
For development purposes, suppose we have a dev cluster. Is it possible to
run the driver program locally (on a developer's machine)?
I.e., just run the driver from the IDE and have it connect
Did you ever find a solution to this problem? I'm having similar issues.
--
Hi all,
We are developing an application which uses Spark & Hive to do static and
ad-hoc reporting. These static reports take a number of parameters and then
run over a data set. We would like to make it easier to test the performance
of these reports on a cluster.
If we have a test cluster
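To give a concrete picture, here is a stripped-down sketch of what one of
these parameterized reports looks like; the table, column and parameter names
are invented for the example, and it assumes a HiveContext on Spark 1.x:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object ReportRunner extends App {
  // Parameters would normally come from the command line or a job config.
  val (region, fromDate, toDate) = ("EMEA", "2014-01-01", "2014-03-31")

  val sc = new SparkContext(new SparkConf().setAppName("static-report"))
  val hive = new HiveContext(sc)

  val start = System.currentTimeMillis()
  val report = hive.sql(
    s"""SELECT product, SUM(amount) AS total
       |FROM sales
       |WHERE region = '$region'
       |  AND sale_date BETWEEN '$fromDate' AND '$toDate'
       |GROUP BY product""".stripMargin)
  report.collect().foreach(println)
  println(s"Report took ${System.currentTimeMillis() - start} ms")

  sc.stop()
}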
Hi premdass,
Where did you set spark.cleaner.referenceTracking = true/false?
Was this in your job-server conf?
Cheers.
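For reference, the only way I've set it so far is directly on the SparkConf
before the context is created; a minimal sketch, assuming you construct the
SparkContext yourself rather than letting the job-server do it:

import org.apache.spark.{SparkConf, SparkContext}

// spark.cleaner.referenceTracking must be in the SparkConf before the
// SparkContext is constructed; setting it afterwards has no effect.
val conf = new SparkConf()
  .setAppName("rdd-cleanup-test")
  .set("spark.cleaner.referenceTracking", "false")

val sc = new SparkContext(conf)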
--
Hi all,
I'm trying to get the jobserver working with Spark 1.0.1. I've got it
building, tests passing, and it connects to my Spark master (e.g.
spark://hadoop-001:7077).
I can also pre-create contexts. These show up in the Spark master console,
i.e. on hadoop-001:8080.
The problem is that after I c