One of the advantages of using spark-jobserver is that it lets you reuse
your contexts (create one context and run multiple jobs on it).

Since you can run multiple jobs in one context, you can also share RDDs
(via NamedRDD) between jobs, e.g. build an MLlib model in one job and
share it with later jobs without the need to persist it.
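
For example, a pair of jobs sharing data through a named RDD might look
roughly like this (a minimal sketch assuming the 0.6.x-era spark-jobserver
job API with the NamedRddSupport trait; the object names and the
"shared-data" key are made up for illustration):

    import com.typesafe.config.Config
    import org.apache.spark.SparkContext
    import spark.jobserver.{NamedRddSupport, SparkJob, SparkJobValid, SparkJobValidation}

    // First job: computes an RDD and registers it under a name that is
    // visible to every later job running in the same context.
    object BuildSharedRdd extends SparkJob with NamedRddSupport {
      override def validate(sc: SparkContext, config: Config): SparkJobValidation =
        SparkJobValid

      override def runJob(sc: SparkContext, config: Config): Any = {
        val data = sc.parallelize(1 to 1000000)
        namedRdds.update("shared-data", data)  // caches and names the RDD
        data.count()
      }
    }

    // Second job: submitted later against the same context; it looks up
    // the named RDD instead of recomputing it.
    object UseSharedRdd extends SparkJob with NamedRddSupport {
      override def validate(sc: SparkContext, config: Config): SparkJobValidation =
        SparkJobValid

      override def runJob(sc: SparkContext, config: Config): Any = {
        val shared = namedRdds.get[Int]("shared-data")
          .getOrElse(sys.error("shared-data not found; run BuildSharedRdd first"))
        shared.filter(_ % 2 == 0).count()
      }
    }

Both jobs are submitted against the same (persistent) context, so the
second one sees whatever the first one cached.
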
It is also useful if you want to run multiple SQL queries and don't want
to create a new SQLContext for every job.
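
For the SQL case specifically, the job-server-extras module of that era
provided a SparkSqlJob trait plus a SQLContextFactory, so one SQLContext
is created with the context and then reused by every job submitted to it.
A rough sketch (the trait/factory names are from memory and the table name
is hypothetical; check your spark-jobserver version for the exact API):

    import com.typesafe.config.Config
    import org.apache.spark.sql.SQLContext
    import spark.jobserver.{SparkJobValid, SparkJobValidation, SparkSqlJob}

    // Runs in a context started with
    // context-factory=spark.jobserver.context.SQLContextFactory, so runJob
    // receives the context's shared SQLContext instead of a bare SparkContext.
    object SharedSqlQuery extends SparkSqlJob {
      def validate(sql: SQLContext, config: Config): SparkJobValidation =
        SparkJobValid

      def runJob(sql: SQLContext, config: Config): Any =
        sql.sql("SELECT count(*) FROM some_registered_table").collect()
    }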



On Thu, Dec 10, 2015 at 11:56 PM, manasdebashiskar <poorinsp...@gmail.com>
wrote:

> We use the Ooyala job server. It is great. It has a great set of APIs to
> cancel jobs, create ad hoc or persistent contexts, etc.
> It also has great support for remote deploys and tests, which helps you
> code faster.
>
> The current version is missing a job progress bar, but I could not find
> one in the hidden Spark APIs either.
>
> In any case, I think the job server is better than the hidden APIs
> because it is not hidden.
>
> ..Manas
>
