I got it to work by running in client mode and using a master URL with the
`local://` prefix; my external cluster manager gets injected just fine. A rough
sketch of the wiring and the registration file follows below.
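
In case it helps anyone else, here is a rough sketch of what the cluster-manager
side can look like. The class name (ArmadaClusterManager) and the
`local://armada` scheme are placeholders I made up for illustration, and
createSchedulerBackend is left unimplemented; the trait and its four methods are
the actual ExternalClusterManager API. Since the trait is package-private to
org.apache.spark, the implementation needs to live under that package.

```scala
package org.apache.spark.scheduler.cluster

import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{ExternalClusterManager, SchedulerBackend, TaskScheduler, TaskSchedulerImpl}

// Placeholder name; this is only a sketch of the wiring, not a working Armada integration.
private[spark] class ArmadaClusterManager extends ExternalClusterManager {

  // Match the master URL passed on the command line. Using a "local://"-prefixed
  // scheme is what lets the URL get past SparkSubmit's master check in client mode.
  override def canCreate(masterURL: String): Boolean =
    masterURL.startsWith("local://armada")

  // The stock TaskSchedulerImpl is usually enough; the backend is where the
  // cluster-specific work happens.
  override def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler =
    new TaskSchedulerImpl(sc)

  // A real implementation would return a SchedulerBackend that submits executor
  // jobs to Armada; left unimplemented in this sketch.
  override def createSchedulerBackend(
      sc: SparkContext,
      masterURL: String,
      scheduler: TaskScheduler): SchedulerBackend = ???

  override def initialize(scheduler: TaskScheduler, backend: SchedulerBackend): Unit =
    scheduler.asInstanceOf[TaskSchedulerImpl].initialize(backend)
}
```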

On Fri, Feb 7, 2025 at 12:38 AM Dejan Pejchev <de...@gr-oss.io> wrote:

> Hello Spark community!
>
> My name is Dejan Pejchev. I am a Software Engineer at G-Research and a
> maintainer of our Kubernetes multi-cluster batch scheduler, Armada.
>
> We are trying to build an integration with Spark where we would like to
> use spark-submit with a master of the form armada://xxxx, which would then
> submit the driver and executor jobs to Armada.
>
> I understand the concept of ExternalClusterManager and how to write and
> provide a new implementation, but I am not clear on how to extend Spark to
> accept it.
>
> I see that SparkSubmit.scala checks the master URL and fails if it isn't
> one of local, mesos, k8s, or yarn.
>
> What is the correct approach for my use case?
>
> Thanks in advance,
> Dejan Pejchev
>
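
One more detail worth calling out: SparkContext discovers ExternalClusterManager
implementations through Java's ServiceLoader, so the jar that ships the manager
also needs a provider-configuration file. The class name below matches the
hypothetical sketch above, not a real Armada class:

```
# META-INF/services/org.apache.spark.scheduler.ExternalClusterManager
org.apache.spark.scheduler.cluster.ArmadaClusterManager
```

With that in place, a client-mode spark-submit with a local://-prefixed master
gets past SparkSubmit's check, and SparkContext hands the URL to whichever
registered manager returns true from canCreate.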
