Re: Spark job status on Kubernetes

2019-03-13 Thread Stavros Kontopoulos
AFAIK, "completed" can happen in case of failures as well; check here: https://github.com/kubernetes/kubernetes/blob/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/pkg/client/conditions/conditions.go#L61. The phase of the pod should be `Succeeded` before drawing that conclusion. This is https://github.com/GoogleCloud
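A minimal sketch of checking the driver pod phase, assuming the official Kubernetes Python client is used instead of raw REST calls; the pod name and namespace below are placeholders:

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use config.load_incluster_config()
    # when running inside the cluster).
    config.load_kube_config()

    v1 = client.CoreV1Api()

    # Placeholder names: substitute the actual driver pod name and namespace.
    pod = v1.read_namespaced_pod(name="my-spark-app-driver", namespace="default")

    # pod.status.phase is one of: Pending, Running, Succeeded, Failed, Unknown.
    # Only "Succeeded" means all containers exited with code 0; a pod can also
    # reach a terminal state via "Failed".
    if pod.status.phase == "Succeeded":
        print("Spark job succeeded")
    elif pod.status.phase == "Failed":
        print("Spark job failed")
    else:
        print("Driver still in phase: " + pod.status.phase)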

Spark job status on Kubernetes

2019-03-13 Thread Chandu Kavar
Hi, We are running Spark jobs on Kubernetes (using Spark 2.4.0 and cluster mode). To get the status of the Spark job, we check the status of the driver pod (using the Kubernetes REST API). Is it okay to assume that the Spark job is successful if the status of the driver pod is COMPLETED? Thanks, Chandu