and
> the remaining 4 were killed with the error below, and I do have enough
> resources available.
>
> On Tue, May 29, 2018 at 6:28 PM Anirudh Ramanathan
> wrote:
>
This looks to me like a kube-dns error that's causing the driver DNS
address to not resolve.
It would be worth double checking that kube-dns is indeed running (in the
kube-system namespace).
Often, with environments like minikube, kube-dns may exit or crash-loop due to
lack of resources.
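As a quick sanity check, one can verify from inside a pod that cluster service names resolve at all. The following is a generic Python sketch (not from the thread; any driver service name you test with is specific to your deployment):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the hostname resolves through the configured DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# In a healthy cluster, the driver's headless service name
# (<driver-svc>.<namespace>.svc.cluster.local) should resolve from
# executor pods; if kube-dns is down, even that lookup will fail.
```

If such a lookup fails for the driver's service name while `localhost` still resolves, cluster DNS is the likely culprit.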
On Tue, May 29
There's a flag to the controller manager that is in charge of retention
policy for terminated or completed pods.
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options
--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage
collector starts deleting terminated pods.
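On a kubeadm-style cluster, for example, this flag is set in the controller manager's static pod manifest. The path and surrounding fields below are assumptions for illustration; adjust for your distribution:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --terminated-pod-gc-threshold=100   # start GC of completed/failed pods past 100
```

The kubelet restarts the controller manager automatically when the static pod manifest changes.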
I think a pod disruption budget might actually work here. It can select the
spark driver pod using a label. Using that with a minAvailable value that's
appropriate here could do it.
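Sketched as a manifest, that might look like the following. The namespace and the `spark-role: driver` label are assumptions; match them to how your driver pods are actually labeled (on older clusters the API group is `policy/v1beta1`):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: spark-driver-pdb
  namespace: default
spec:
  minAvailable: 1          # never voluntarily evict below one driver pod
  selector:
    matchLabels:
      spark-role: driver   # label assumed; use whatever your driver pods carry
```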
In a more general sense, we do plan on some future work to support driver
recovery, which should help long running jobs.
Spark community could share their
> experience around this. I would like to know more about your production
> experience and the monitoring tools you are using.
>
>
>
> Since Spark on Kubernetes is a relatively new addition to Spark, I was
> wondering if structured streaming is stable in production. We were also
> evaluating Apache Beam with Flink.
>
>
>
> Regards,
>
> Krishna
>
--
Anirudh Ramanathan
like to know if the community would be interested in such a feature.
>>
>> Cheers
>>
>> Marius
>>
>> ---------
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
>
--
Anirudh Ramanathan
spark-submit
> when there are remote dependencies.
>
>
> https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-remote-dependencies
>
> Please suggest
>
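For reference, a submission with remote dependencies might look like the sketch below. The master address, image name, and URLs are placeholders, not values from the thread:

```shell
# Hypothetical spark-submit against a Kubernetes master, fetching the
# application jar and an extra dependency from remote locations.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<spark-image> \
  --jars https://example.com/deps/extra-lib.jar \
  https://example.com/jars/spark-examples.jar
```

Per the linked docs, remote dependencies are downloaded into the driver and executor pods before the application starts.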
--
Anirudh Ramanathan