Hi Team,

I am trying to create a Spark cluster on Kubernetes, with RBAC enabled, using
spark-submit. I am using Spark 2.4.1.
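For reference, the job is submitted along the lines of the standard
Spark-on-Kubernetes invocation (the master URL, image name, jar path, and
service account below are placeholders, not my exact values):

```shell
# Illustrative spark-submit invocation for Spark 2.4 on Kubernetes;
# <k8s-apiserver-host>, <port>, and <spark-image> are placeholders.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.1.jar
```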
spark-submit is able to launch the driver pod by contacting the Kubernetes
API server, but the executor pods are not getting launched. I can see the
warning message below in the driver pod logs:


19/09/27 10:16:01 INFO TaskSchedulerImpl: Adding task set 0.0 with 3 tasks
19/09/27 10:16:16 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

I have faced this issue in standalone Spark clusters and resolved it there,
but I am not sure how to resolve it on Kubernetes. I have not set any
ResourceQuota in the Kubernetes RBAC YAML file, and there is ample memory
and CPU available for new pods/containers to be launched.
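For context, the service account setup follows the pattern from the
Spark-on-Kubernetes docs, roughly as below (the account name and namespace
are illustrative; the point is that the driver's service account needs
permission to create executor pods):

```shell
# Illustrative RBAC setup per the Spark-on-Kubernetes docs; "spark" and
# "default" are placeholder names. The clusterrolebinding grants the
# driver's service account the "edit" role so it can create executor pods.
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=default:spark \
  --namespace=default
```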

Any leads/pointers to resolve this issue would be of great help.

Thanks and Regards,
Manish Gupta
