Re: Spark 1.3 build with hive support fails

2015-03-30 Thread nightwolf
I am having the same problems. Did you find a fix? -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-3-build-with-hive-support-fails-tp22215p22309.html Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Unit testing jar request

2014-11-12 Thread nightwolf
+1 I agree we need this too. Looks like there is already an issue for it here: https://spark-project.atlassian.net/browse/SPARK-750 -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Unit-testing-jar-request-tp16475p18801.html

Re: Running driver/SparkContent locally

2014-08-05 Thread nightwolf
The code for this example is very simple: object SparkMain extends App with Serializable { val conf = new SparkConf(false) //.setAppName("cc-test") //.setMaster("spark://hadoop-001:7077") //.setSparkHome("/tmp") .set("spark.driver.host", "192.168.23.108") .set("spark.cores.
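The snippet above is cut off mid-property, so here is a minimal sketch of what a complete local-driver setup of this kind might look like. The master URL, driver host, and the completion of the truncated `.set("spark.cores.` line are illustrative assumptions, not values confirmed by the original post; this assumes the Spark dependencies are on the classpath.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only (assumed values): a driver running on a developer machine,
// connecting to a remote standalone master.
object SparkMain extends App with Serializable {
  val conf = new SparkConf(false)
    .setAppName("cc-test")
    .setMaster("spark://hadoop-001:7077")        // assumed remote master URL
    .set("spark.driver.host", "192.168.23.108")  // address workers use to reach this driver
    .set("spark.cores.max", "2")                 // assumed completion of the truncated line

  val sc = new SparkContext(conf)
  // Trivial job to verify the driver can actually reach the cluster.
  println(sc.parallelize(1 to 10).sum())
  sc.stop()
}
```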

Running driver/SparkContent locally

2014-08-05 Thread nightwolf
I'm trying to run a local driver (on a development machine) and have it communicate with the Spark master and workers; however, I'm having a few problems getting the driver to connect and run a simple job from within an IDE. It all looks like it works, but when I try to do something simple

Re: Spark Deployment Patterns - Automated Deployment & Performance Testing

2014-08-05 Thread nightwolf
Thanks AL! That's what I thought. I've set up Nexus to maintain the Spark libs and download them when needed, for development purposes. Suppose we have a dev cluster. Is it possible to run the driver program locally (on a developer's machine)? I.e. just run the driver from the IDE and have it connect

Re: java.lang.IllegalStateException: unread block data while running the sampe WordCount program from Eclipse

2014-08-05 Thread nightwolf
Did you ever find a solution to this problem? I'm having similar issues. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalStateException-unread-block-data-while-running-the-sampe-WordCount-program-from-Ecle-tp8388p11412.html

Spark Deployment Patterns - Automated Deployment & Performance Testing

2014-07-30 Thread nightwolf
Hi all, We are developing an application which uses Spark & Hive to do static and ad-hoc reporting. These static reports take a number of parameters and then run over a data set. We would like to make it easier to test the performance of these reports on a cluster. If we have a test cluster

Re: RDD Cleanup

2014-07-30 Thread nightwolf
Hi premdass, Where did you set spark.cleaner.referenceTracking = true/false? Was this in your job-server conf? Cheers. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/RDD-Cleanup-tp9182p10939.html
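For readers following this thread: `spark.cleaner.referenceTracking` is a real Spark property that controls whether the context cleaner tracks and cleans up unreferenced RDDs and shuffle data, and it has to be set before the SparkContext is created. A minimal configuration sketch (the app name here is illustrative; whether the job server reads it from its own conf file instead is exactly the open question in this thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: set the cleaner flag on the SparkConf up front;
// changing it after the SparkContext exists has no effect.
val conf = new SparkConf()
  .setAppName("rdd-cleanup-example")               // illustrative name
  .set("spark.cleaner.referenceTracking", "true")  // enable automatic RDD/shuffle cleanup
val sc = new SparkContext(conf)
```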

Spark & Ooyala Job Server

2014-07-30 Thread nightwolf
Hi all, I'm trying to get the jobserver working with Spark 1.0.1. I've got it building, tests passing, and it connects to my Spark master (e.g. spark://hadoop-001:7077). I can also pre-create contexts. These show up in the Spark master console, i.e. on hadoop-001:8080. The problem is that after I c