Hi firemonk9,
What you're doing looks interesting. Can you share some more details?
Are you running the same SparkContext for every job, or a separate
SparkContext for each job?
Does your system need to share RDDs across multiple jobs? If so, how do you
implement that?
Also wh
I strongly recommend spawning a new process for the Spark jobs. It gives much
cleaner separation: your driver program won't be clobbered if the Spark job
dies, and the driver can even watch for failures and restart the job.
In the Scala standard library, the sys.process package has classes for
constructing and interacting with external operating-system processes.
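As a rough illustration of that approach, something like the sketch below could launch a job through spark-submit from the driver and retry on failure. The class name, master URL, and jar path are placeholders, not taken from this thread.

  import scala.sys.process._

  // Launch a Spark job in a separate JVM via spark-submit and retry a
  // bounded number of times if it exits with a non-zero code.
  object SparkJobLauncher {

    // All of the arguments below are placeholders for illustration only.
    private val submitCmd = Seq(
      "spark-submit",
      "--class", "com.example.MySparkJob",
      "--master", "spark://master:7077",
      "/path/to/my-spark-job.jar"
    )

    // Process(...).! blocks until the child process exits and returns its
    // exit code; the child's stdout/stderr go to this process's console.
    private def runOnce(): Int = Process(submitCmd).!

    def main(args: Array[String]): Unit = {
      val maxRetries = 3
      var attempt = 0
      var exitCode = runOnce()
      while (exitCode != 0 && attempt < maxRetries) {
        attempt += 1
        println(s"Spark job exited with $exitCode; retrying ($attempt/$maxRetries)")
        exitCode = runOnce()
      }
    }
  }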
On 21 Apr 2015, at 17:34, Richard Marscher <rmarsc...@localytics.com> wrote:
- There are System.exit calls built into Spark as of now that could kill your
running JVM. We have shadowed some of the most offensive bits within our own
application to work around this. You'd likely want to do something similar.
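For readers wondering what such a workaround can look like: shadowing aside, one generic JVM-level trick is to install a SecurityManager that turns System.exit into an exception. The sketch below illustrates that general technique only; it is not the exact code Richard describes.

  import java.security.Permission

  // Trap System.exit calls made by embedded library code so they surface as
  // an exception instead of terminating the whole driver JVM.
  class ExitTrappedException(val status: Int)
    extends SecurityException(s"System.exit($status) trapped")

  class NoExitSecurityManager extends SecurityManager {
    // Permit everything else; we only intercept exit.
    override def checkPermission(perm: Permission): Unit = ()
    override def checkPermission(perm: Permission, context: AnyRef): Unit = ()
    override def checkExit(status: Int): Unit =
      throw new ExitTrappedException(status)
  }

  object ExitGuard {
    // Run `body` with the trapping manager installed, restoring the previous
    // manager afterwards. Returns Left(exitCode) if an exit was attempted.
    def withExitTrapped[T](body: => T): Either[Int, T] = {
      val previous = System.getSecurityManager
      System.setSecurityManager(new NoExitSecurityManager)
      try Right(body)
      catch { case e: ExitTrappedException => Left(e.status) }
      finally System.setSecurityManager(previous)
    }
  }

Whether trapping exit like this is safe depends on what Spark was about to do when it called exit, so it is more of a band-aid than a fix; running the job in its own process, as suggested above, avoids the problem entirely.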
> hope this helps.
Any information/tips/best-practices in this regard?
Cheers!
Ajay
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Instantiating-starting-Spark-jobs-programmatically-tp22577.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.