I apologize if my previous explanation was unclear, and I realize I didn’t
provide enough context for my question.
The reason I want to submit a Spark application to a Kubernetes cluster
using the Spark Operator is that I want to use Kubernetes as the Cluster
Manager, rather than Standalone mode.
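For context, submitting through the Spark Operator means describing the job as a `SparkApplication` custom resource rather than calling `spark-submit` directly. A minimal sketch of such a manifest is below; the namespace, image name, and application jar path are illustrative assumptions, not values from this thread:

```yaml
# Minimal SparkApplication sketch for the Kubernetes Spark Operator.
# Image, namespace, and jar path are placeholders for illustration only.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: example-app          # hypothetical name
  namespace: default
spec:
  type: Scala
  mode: cluster              # driver runs as a pod on the cluster
  image: my-registry/spark:3.5.2   # custom image (see jar discussion below)
  mainClass: org.example.Main      # hypothetical entry point
  mainApplicationFile: local:///opt/spark/examples/app.jar
  sparkVersion: 3.5.2
  driver:
    cores: 1
    memory: 1g
  executor:
    instances: 2
    cores: 1
    memory: 1g
```

The Operator translates this resource into a driver pod, which in turn requests executor pods from Kubernetes acting as the cluster manager.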
Oh, this issue is actually straightforward to solve, particularly in
Spark 3.5.2.
Just download the `spark-connect` Maven jar and place it in
`$SPARK_HOME/jars`, then rebuild the Docker image. I see that I had posted
a comment on that Jira as well. I could fix this up for a standalone cluster.
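As a sketch of the rebuild step, the jar can be baked into the image at build time. This assumes the Scala 2.12 build of Spark 3.5.2 and the stock `apache/spark` base image, where `$SPARK_HOME` is `/opt/spark`; adjust the coordinates for your Scala version:

```dockerfile
# Sketch: extend the stock Spark image with the Spark Connect jar.
# Assumes Spark 3.5.2 / Scala 2.12; SPARK_HOME is /opt/spark in this image.
FROM apache/spark:3.5.2

# Fetch the spark-connect artifact from Maven Central into $SPARK_HOME/jars
ADD https://repo1.maven.org/maven2/org/apache/spark/spark-connect_2.12/3.5.2/spark-connect_2.12-3.5.2.jar \
    /opt/spark/jars/
```

After building and pushing this image, point the `SparkApplication` (or `spark-submit`) at it so the driver and executors all have the connect server classes on the classpath.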
Hi Prabodh,
Thank you for your response.
As you can see from the following Jira issue, it is possible to run the
Spark Connect driver on Kubernetes:
https://issues.apache.org/jira/browse/SPARK-45769
However, this issue describes a problem that occurs when the Driver and
Executors are running on