I think I do mean the job that Mark is talking about, but that's also the thing
that's being stopped by the dcos command and (hopefully) by the dispatcher,
isn't it?
It would be really good if the issue (SPARK-17064) were resolved, but
for now I'll make do with cancelling.
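To be concrete, this is roughly what "cancelling" means here: a minimal sketch,
assuming the driver still has its SparkContext in hand. The group id "long-job"
and the two transform functions are only illustrative names, not anything from
an actual application.

import org.apache.spark.{SparkConf, SparkContext}

object CancelSketch {
  def slowTransform(i: Int): Int = { Thread.sleep(1); i }  // stand-in for the mistaken job
  def fixedTransform(i: Int): Int = i + 1                  // stand-in for the corrected job

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cancel-sketch"))

    // Tag everything submitted from this thread with a group id so it can be
    // cancelled as a unit; interruptOnCancel = true asks Spark to interrupt
    // tasks that are already running, not just the ones still waiting.
    sc.setJobGroup("long-job", "first, mistaken attempt", interruptOnCancel = true)
    sc.parallelize(1 to 1000000).map(slowTransform).countAsync()

    // The user notices the mistake and cancels the whole group.
    sc.cancelJobGroup("long-job")

    // Then submits the corrected job.
    sc.clearJobGroup()
    println(sc.parallelize(1 to 1000000).map(fixedTransform).count())

    sc.stop()
  }
}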
You're using the proper Spark definition of "job", but I believe Richard
means "driver".
On Wed, Oct 5, 2016 at 2:17 PM, Mark Hamstra wrote:
> Yes and no. Something that you need to be aware of is that a Job as such
> exists in the DAGScheduler as part of the Application running on the
> Driver
Yes and no. Something that you need to be aware of is that a Job as such
exists in the DAGScheduler as part of the Application running on the
Driver. When talking about stopping or killing a Job, however, what people
often mean is not just stopping the DAGScheduler from telling the Executors
to run more Tasks for that Job, but also stopping the Tasks that are already
running.
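To make that concrete: since the Job bookkeeping lives in the DAGScheduler on
the Driver, the driver process can list its active jobs and cancel them by id.
A minimal sketch, assuming nothing beyond a running SparkContext; the sleep and
the example RDD are only there to give the status tracker something to report.

import org.apache.spark.{SparkConf, SparkContext}

object DriverSideCancel {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("driver-side-cancel"))

    // Submit a job without blocking this thread.
    sc.parallelize(1 to 10000000).map(_ * 2).countAsync()
    Thread.sleep(1000)  // give the DAGScheduler a moment to register the job

    // The job ids come from the driver's own bookkeeping; cancelling them
    // stops the DAGScheduler from scheduling further tasks for those jobs.
    sc.statusTracker.getActiveJobIds().foreach(jobId => sc.cancelJob(jobId))

    sc.stop()
  }
}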
If running in client mode, just kill the job. If running in cluster mode,
the Spark Dispatcher exposes an HTTP API for killing jobs. I don't think
this is externally documented, so you might have to check the code to find
this endpoint. If you run on DC/OS, you can just run "dcos spark kill <submission id>".
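For what it's worth, here is a hedged sketch of what calling that kill endpoint
could look like from plain Scala. The dispatcher host, port, submission id, and
the /v1/submissions/kill/<id> path are assumptions based on the REST submission
protocol in the Spark source, not on any external documentation, so check them
against the code for your version.

import java.net.{HttpURLConnection, URL}
import scala.io.Source

object KillViaDispatcher {
  def main(args: Array[String]): Unit = {
    val dispatcher = "http://dispatcher-host:7077"   // placeholder host:port
    val submissionId = "driver-20161005123456-0001"  // placeholder submission id

    // POST to the (undocumented) kill endpoint of the REST submission server.
    val url = new URL(s"$dispatcher/v1/submissions/kill/$submissionId")
    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.getOutputStream.close()                     // empty request body

    val response = Source.fromInputStream(conn.getInputStream).mkString
    println(s"${conn.getResponseCode}: $response")
    conn.disconnect()
  }
}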
Hi,
how can I stop a long-running job?
We're running Spark in Mesos coarse-grained mode. Suppose a user starts a
long-running job, makes a mistake, changes a transformation and
runs the job again. In this case I'd like to cancel the first job and after
that start the second job. It would