Re: Spark Client

2015-06-03 Thread pavan kumar Kolamuri
Thanks Akhil, Richard, and Oleg for your quick responses. @Oleg we actually tried the same thing, but unfortunately when we throw the exception, the Akka framework catches it, treats the job as failed, and reruns the Spark jobs indefinitely. Since in OneForOneStrategy in Akka, the max no of re…
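
For context on the restart loop pavan describes, here is a generic Akka supervisor sketch (not Spark's internal code) showing the maxNrOfRetries knob of OneForOneStrategy that caps how often a failing child actor is restarted (Akka 2.3-era API assumed):

    import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
    import akka.actor.SupervisorStrategy._
    import scala.concurrent.duration._

    class JobSupervisor extends Actor {
      // Cap restarts: after 3 failures within 1 minute the child is
      // stopped instead of being restarted indefinitely.
      override val supervisorStrategy: SupervisorStrategy =
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
          case _: Exception => Restart
        }

      def receive = {
        case props: Props => sender() ! context.actorOf(props)
      }
    }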

Re: Spark Client

2015-06-03 Thread Oleg Zhurakousky
I am not sure why Spark relies on System.exit; hopefully someone can provide a technical justification for it (I'm very curious to hear it). But for your use case you can easily trap the System.exit call before the JVM exits with a simple SecurityManager implementation and a try/catch. Here are…
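
Oleg's message is cut off before his example, but the technique he names is a standard one; a minimal sketch of trapping System.exit around a SparkSubmit.main call might look like this (the object and method names here are illustrative, not from the thread):

    import java.security.Permission
    import org.apache.spark.deploy.SparkSubmit

    // Thrown in place of the JVM exit so the caller regains control.
    case class ExitTrappedException(status: Int) extends SecurityException

    class NoExitSecurityManager extends SecurityManager {
      override def checkExit(status: Int): Unit =
        throw ExitTrappedException(status)
      // Allow everything else so normal operation is unaffected.
      override def checkPermission(perm: Permission): Unit = ()
      override def checkPermission(perm: Permission, context: AnyRef): Unit = ()
    }

    object TrappedSubmit {
      def submit(args: Array[String]): Int = {
        val previous = System.getSecurityManager
        System.setSecurityManager(new NoExitSecurityManager)
        try {
          SparkSubmit.main(args) // would normally terminate the JVM
          0
        } catch {
          case ExitTrappedException(status) => status
        } finally {
          System.setSecurityManager(previous)
        }
      }
    }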

Re: Spark Client

2015-06-03 Thread Richard Marscher
I think the short answer to the question is no: there is no alternate API that avoids the System.exit calls. You can craft a workaround like the one suggested in this thread. For comparison, we do programmatic submission of applications in a long-running client application. To get ar…
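
Richard's message is truncated before the details of his approach. One generic way to isolate System.exit from a long-running client, offered here only as a sketch, is to launch spark-submit in a child JVM so the exit terminates only that process (the paths and class names below are hypothetical):

    import scala.sys.process._

    // Hypothetical paths and class names; adjust for your deployment.
    val cmd = Seq(
      "/opt/spark/bin/spark-submit",
      "--class", "com.example.MyJob",
      "--master", "spark://sigmoid:7077",
      "/opt/jobs/my-job.jar"
    )

    // The child JVM's System.exit terminates only the child; the
    // long-running client simply reads the exit status here.
    val exitCode = Process(cmd).!
    println(s"spark-submit finished with exit code $exitCode")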

Re: Spark Client

2015-06-03 Thread Akhil Das
Did you try this? Create an sbt project like:

    // Create your context
    val sconf = new SparkConf().setAppName("Sigmoid").setMaster("spark://sigmoid:7077")
    val sc = new SparkContext(sconf)

    // Do some computations
    sc.parallelize(1 to 1).take(10).foreach(println)

    // Now return the exit stat…
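
Akhil's snippet is cut off at the point where the status would be returned. A sketch of the complete pattern he seems to be describing (the surrounding structure is assumed) might be:

    import org.apache.spark.{SparkConf, SparkContext}

    object SigmoidJob {
      // Return a status code to the caller instead of calling
      // System.exit, so a long-running host JVM survives the job.
      def run(): Int = {
        val sconf = new SparkConf().setAppName("Sigmoid").setMaster("spark://sigmoid:7077")
        val sc = new SparkContext(sconf)
        try {
          sc.parallelize(1 to 1).take(10).foreach(println)
          0 // success
        } catch {
          case e: Exception => e.printStackTrace(); 1 // failure
        } finally {
          sc.stop()
        }
      }
    }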

Re: Spark Client

2015-06-03 Thread pavan kumar Kolamuri
Hi Akhil, sorry, I may not be conveying the question properly. We are actually looking to launch a Spark job from a long-running workflow manager, which invokes the Spark client via SparkSubmit. Unfortunately, upon successful completion of the application, the client exits with a System.exit(0) or System.…

Re: Spark Client

2015-06-03 Thread Akhil Das
Run it as a standalone application. Create an sbt project and do sbt run? Thanks Best Regards On Wed, Jun 3, 2015 at 11:36 AM, pavan kumar Kolamuri < pavan.kolam...@gmail.com> wrote: > Hi guys , i am new to spark . I am using sparksubmit to submit spark jobs. > But for my use case i don't want i