Try to reproduce what the spark-submit shell script does: setting up the
classpath, the master URL, etc.
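A minimal sketch of the idea, assuming a standalone cluster; the master URL and jar path below are placeholders for your own:

import org.apache.spark.{SparkConf, SparkContext}

object RunFromIde {
  def main(args: Array[String]): Unit = {
    // What spark-submit's --master and application-jar arguments normally set up:
    val conf = new SparkConf()
      .setAppName("run-from-ide")
      .setMaster("spark://master-host:7077")                // placeholder cluster master URL
      .setJars(Seq("target/scala-2.10/myapp_2.10-0.1.jar")) // placeholder app jar to ship to executors

    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 100).sum())
    sc.stop()
  }
}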
> On Nov 9, 2015, at 7:07 AM, Tathagata Das wrote:
>
> You cannot submit from Eclipse to a cluster that easily. You can run locally
> (master set to local...), and it should
While I have a preference for Scala (not surprising for a Typesafe person), the
DataFrame API gives Python feature and performance parity. The RDD API gives
feature parity, but not performance parity.
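A minimal sketch of why, using made-up file and column names: the DataFrame query below is compiled into a Catalyst plan and optimized the same way whether it is written in Scala or Python, so neither language pays a penalty.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("parity").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)

val events = sqlContext.read.json("events.json")  // hypothetical input file
events.filter(events("status") === "ok")
  .groupBy("country")
  .count()
  .explain()  // the optimized plan printed here would be identical from PySpark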
So, use whichever language makes you most successful, for whatever other reasons matter to you ;)
> On Oct 6, 2015, at 4:14 PM
You are mixing the 1.0.0 Spark SQL jar with Spark 1.4.0 jars in your build file.
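A minimal build.sbt sketch with every Spark module pinned to the same version (the project name is a placeholder):

name := "myapp"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // keep spark-sql in lockstep with spark-core: both 1.4.0
  "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.4.0" % "provided"
)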
> On Jul 14, 2015, at 7:57 AM, ashwang168 wrote:
>
> Hello!
>
> I am currently using Spark 1.4.0, Scala 2.10.4, and sbt 0.13.8 to try to
> create a jar file from a Scala file (attached
There is no mechanism for keeping an RDD up to date with a changing source.
However, you could set up a stream that watches the directory for changes and
processes the new files, or use the Hive integration in Spark SQL to run Hive
queries directly. (However, old query results will still grow stale.)
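A minimal sketch of the streaming approach, assuming Spark Streaming 1.x; the directory path and batch interval are placeholders:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("watch-dir").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(30))

// Each batch sees only files that appeared in the directory since the last one
val lines = ssc.textFileStream("hdfs:///data/incoming")
lines.foreachRDD { rdd =>
  println(s"new records this batch: ${rdd.count()}")  // reprocess or merge here
}

ssc.start()
ssc.awaitTermination()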
Show us the code. This shouldn't happen for the simple process you described.
> On Mar 27, 2015, at 5:47 AM, jamborta wrote:
>
> Hi all,
>
> We have a workflow that pulls in data from CSV files; the original setup of
> the workflow was to parse the data as it