AFAIK Completed can happen in case of failures as well; the `PodCompleted` condition matches both the `Succeeded` and `Failed` phases, check here:
https://github.com/kubernetes/kubernetes/blob/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/pkg/client/conditions/conditions.go#L61

The phase of the pod should be `Succeeded` before you draw a conclusion. Here is how the Spark operator uses that info to deduce the application status:
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/pkg/controller/sparkapplication/sparkapp_util.go#L75
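As a minimal illustrative sketch (not the operator's actual code; the helper name is hypothetical), here is the kind of mapping you would apply to the driver pod's `status.phase` as returned by the Kubernetes REST API:

```python
# Hypothetical helper: map the driver pod's status.phase (from e.g.
# GET /api/v1/namespaces/{ns}/pods/{name}) to a Spark job outcome.
# Both Succeeded and Failed are terminal ("completed") phases, so only
# Succeeded -- not mere termination -- means the job actually succeeded.

def spark_job_outcome(pod_phase: str) -> str:
    terminal = {"Succeeded": "SUCCEEDED", "Failed": "FAILED"}
    # Any non-terminal phase (Pending, Running, Unknown) is still in flight.
    return terminal.get(pod_phase, "RUNNING")

# Usage:
# spark_job_outcome("Succeeded") -> "SUCCEEDED"
# spark_job_outcome("Failed")    -> "FAILED"
# spark_job_outcome("Running")   -> "RUNNING"
```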

Stavros

On Wed, Mar 13, 2019 at 5:48 PM Chandu Kavar <ccka...@gmail.com> wrote:

> Hi,
>
> We are running Spark jobs on Kubernetes (using Spark 2.4.0 and cluster
> mode). To get the status of a Spark job, we check the status of the driver
> pod (using the Kubernetes REST API).
>
> Is it okay to assume that a Spark job is successful if the status of the
> driver pod is COMPLETED?
>
> Thanks,
> Chandu
>
>
