Mark, thanks for the response.

Let me rephrase my statements.

"I am submitting a Spark application(*Application*#A) with scheduler.mode
as FAIR and dynamicallocation=true and it got all the available executors.

In the meantime, submitting another Spark Application (*Application* # B)
with the scheduler.mode as FAIR and dynamicallocation=true but it got only
one executor. "
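
For reference, below is roughly how each application is configured. The
object/app names and the maxExecutors value are illustrative placeholders,
not our exact settings:

import org.apache.spark.sql.SparkSession

object ApplicationA {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ApplicationA")
      // FAIR only affects how jobs *within* this application share its
      // executors; it does not balance executors across applications.
      .config("spark.scheduler.mode", "FAIR")
      .config("spark.dynamicAllocation.enabled", "true")
      // Dynamic allocation on YARN also needs the external shuffle service.
      .config("spark.shuffle.service.enabled", "true")
      // Illustrative cap: without one, the first application can take every
      // available executor and keep it as long as its tasks stay busy, since
      // executors are released only after they have been idle for
      // spark.dynamicAllocation.executorIdleTimeout.
      .config("spark.dynamicAllocation.maxExecutors", "25")
      .getOrCreate()

    // ... jobs for Application A (Application B is identical apart from
    // the app name) ...
    spark.stop()
  }
}

If capping spark.dynamicAllocation.maxExecutors per application (or using
separate YARN queues) is the usual way to avoid this, please let me know.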

Thanks & Regards,
Gokula Krishnan *(Gokul)*

On Thu, Jul 20, 2017 at 4:56 PM, Mark Hamstra <m...@clearstorydata.com>
wrote:

> First, Executors are not allocated to Jobs, but rather to Applications. If
> you run multiple Jobs within a single Application, then each of the Tasks
> associated with Stages of those Jobs has the potential to run on any of the
> Application's Executors. Second, once a Task starts running on an Executor,
> it has to complete before another Task can be scheduled using the prior
> Task's resources -- the fair scheduler is not preemptive of running Tasks.
>
> On Thu, Jul 20, 2017 at 1:45 PM, Gokula Krishnan D <email2...@gmail.com>
> wrote:
>
>> Hello All,
>>
>> We have a cluster with 50 Executors, each with 4 Cores, so at most 200
>> cores (task slots) are available.
>>
>> I am submitting a Spark application (JOB A) with scheduler.mode as FAIR
>> and dynamicAllocation=true, and it gets all the available executors.
>>
>> In the meantime, I submit another Spark application (JOB B) with
>> scheduler.mode as FAIR and dynamicAllocation=true, but it gets only one
>> executor.
>>
>> Normally this situation occurs when one of the jobs runs with
>> scheduler.mode=FIFO.
>>
>> 1) Have you ever faced this issue? If so, how did you overcome it?
>>
>> I was under the impression that as soon as I submit JOB B, the Spark
>> scheduler would release a few resources from JOB A and share them with
>> JOB B in a round-robin fashion. Is that not the case?
>>
>> Appreciate your response!
>>
>>
>> Thanks & Regards,
>> Gokula Krishnan *(Gokul)*
>>
>
>
