Hi team,
I am working on Spark on Kubernetes and have a scenario where I need to
run Spark on Kubernetes in client mode from a Jupyter notebook across two
different Kubernetes clusters. Is it possible in client mode to spin up
the driver in one Kubernetes cluster and the executors in another?
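For context, in client mode on Kubernetes the executors connect back to the driver, so whichever cluster the executors run in must be able to reach the driver's host and port. A rough sketch of the relevant configuration (the API server URL, image name, and host name below are placeholders, not values from this thread):

```shell
# Client-mode sketch: the driver runs wherever spark-shell/Jupyter runs;
# executors are scheduled on the cluster behind the k8s master URL.
# The driver address below must be routable FROM the executor pods.
spark-submit \
  --master k8s://https://EXECUTOR-CLUSTER-APISERVER:6443 \
  --deploy-mode client \
  --conf spark.kubernetes.container.image=my-spark-image:latest \
  --conf spark.driver.host=DRIVER-REACHABLE-HOSTNAME \
  --conf spark.driver.port=7078 \
  --conf spark.driver.blockManager.port=7079 \
  ...
```

So the cross-cluster question largely reduces to networking: if executor pods in cluster B can route to the driver running in cluster A, and the driver can reach cluster B's API server, client mode itself does not rule the topology out.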
Hey guys,
I have an external API from which I can download the main jar. When I do
`spark-submit ...all confs... https:api.call.com/somefile.jar`, it gives
an error that the file already exists in the tmp directory and the file
contents don't match. How can I fix this? Do I need to use a kubernet
c
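The "contents don't match" error typically comes from Spark refusing to overwrite an already-fetched dependency whose bytes differ from the new download. Assuming that is the cause here, the usual knob is the `spark.files.overwrite` configuration property (whether it resolves this particular submission is an assumption, not something confirmed in the thread):

```shell
# Sketch: let Spark overwrite a previously fetched file in the work/tmp
# directory when the newly downloaded contents differ.
# The jar URL is the placeholder from the thread, not a real endpoint.
spark-submit \
  --conf spark.files.overwrite=true \
  ...all confs... \
  https:api.call.com/somefile.jar
```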
> executor scaling but we are restricted to using client mode since we are
> only using Spark shells.
>
> Could anyone help me understand the barriers to getting dynamic executor
> scaling to work in client mode on Kubernetes?
>
> Thanks,
> Steven
>
> On Sat, Ma
Hi,
Dynamic executor scaling is working fine for Spark on Kubernetes
(latest from the Spark master repository) in cluster mode. Is dynamic
executor scaling available for client mode? If yes, where can I find the
usage doc for it?
If no, is there a PR open for this?
Thanks,
Pradee
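For reference, the configuration usually cited for dynamic allocation on Spark on Kubernetes (which has no external shuffle service) relies on shuffle tracking; whether this behaves identically in client mode is exactly the open question in this thread. A minimal sketch, with a placeholder API server URL:

```shell
# Sketch of dynamic-allocation settings typically used with Spark on
# Kubernetes; shuffle tracking stands in for the external shuffle service.
spark-shell \
  --master k8s://https://APISERVER:6443 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10
```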