Hello All,

We have a cluster with 50 executors, each with 4 cores, so at most
200 cores are available.

I submitted a Spark application (JOB A) with spark.scheduler.mode=FAIR and
spark.dynamicAllocation.enabled=true, and it got all of the available executors.

In the meantime, I submitted another Spark application (JOB B), also with
spark.scheduler.mode=FAIR and spark.dynamicAllocation.enabled=true, but it got
only one executor.
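
For reference, the submissions look roughly like the sketch below (the YARN
master and the application file are placeholders, not our exact command; JOB B
is submitted the same way while JOB A is still running). The external shuffle
service is enabled because dynamic allocation requires it on YARN:

    spark-submit \
      --master yarn \
      --conf spark.scheduler.mode=FAIR \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      job_a.py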

Normally this situation occurs when one of the jobs runs with
spark.scheduler.mode=FIFO.

1) Have you ever faced this issue? If so, how did you overcome it?

I was under the impression that as soon as I submit JOB B, the Spark
scheduler would release a few resources from JOB A and share them with
JOB B in a round-robin fashion.
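
In case it helps frame the question: as I understand it, when an executor is
released is governed by the dynamic-allocation timeouts below. The values
shown here are illustrative only, not what we actually run:

    spark.dynamicAllocation.minExecutors=1
    spark.dynamicAllocation.maxExecutors=25
    # release an executor after it has been idle this long
    spark.dynamicAllocation.executorIdleTimeout=60s
    # same, but for executors holding cached data (default: never)
    spark.dynamicAllocation.cachedExecutorIdleTimeout=120s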

Appreciate your responses!


Thanks & Regards,
Gokula Krishnan (Gokul)
