You can terminate a job group from the SparkContext. You'll have to send
the SparkContext across to your task.
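
A minimal driver-side sketch of the job-group API that advice refers to
(SparkContext.setJobGroup / SparkContext.cancelJobGroup). The group id, the
sample job, and the cancellation trigger below are illustrative, not from the
original thread; the cancel call itself runs on the driver:

import org.apache.spark.{SparkConf, SparkContext}

object CancelGroupExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("cancel-group-example").setMaster("local[2]"))

    val worker = new Thread(new Runnable {
      def run(): Unit = {
        // Tag every job submitted from this thread with a group id;
        // interruptOnCancel = true asks Spark to interrupt running tasks.
        sc.setJobGroup("my-group", "work that may hit an unrecoverable error",
          interruptOnCancel = true)
        try {
          // Stand-in for the real work: 100 slow tasks.
          sc.parallelize(1 to 100, 100).foreach(_ => Thread.sleep(1000))
        } catch {
          // Cancellation surfaces here as a SparkException on the caller.
          case e: Exception => println("Job ended: " + e.getMessage)
        }
      }
    })
    worker.start()

    // Elsewhere on the driver, e.g. once a task has signalled an
    // unrecoverable error (via an accumulator, external store, etc.):
    Thread.sleep(2000)
    sc.cancelJobGroup("my-group")

    worker.join()
    sc.stop()
  }
}

Note that interruptOnCancel relies on tasks responding to Thread.interrupt,
so long-running task code should remain interruptible.
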
On 21 Jun 2014 01:09, "Piotr Kołaczkowski" <pkola...@datastax.com> wrote:

> If the task detects an unrecoverable error, i.e. an error that we can't
> expect to fix by retrying or by moving the task to another node, how do we
> stop the job / prevent Spark from retrying it?
>
> def process[T](taskContext: TaskContext, data: Iterator[T]) {
>    ...
>
>    if (unrecoverableError) {
>       ??? // terminate the job immediately
>    }
>    ...
>  }
>
> Somewhere else:
> rdd.sparkContext.runJob(rdd, something.process _)
>
>
> Thanks,
> Piotr
>
>
> --
> Piotr Kolaczkowski, Lead Software Engineer
> pkola...@datastax.com
>
> http://www.datastax.com/
> 777 Mariners Island Blvd., Suite 510
> San Mateo, CA 94404
>
