Sorry, but I don't get the scope of the problem from your description. It
seems to be an improvement for the Spark standalone scheduler (i.e. not for
YARN or Mesos)?
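
For what it's worth, my (possibly wrong) reading of "delegating the
scheduling logic to the kernel of each worker" is something along the lines
of the sketch below for standalone mode: the worker over-commits cores and
the host OS time-slices the concurrently running executor JVMs. The master
URL and the numbers are only illustrative, and this may not be what the JIRA
actually proposes.

  // Hypothetical illustration only: over-commit cores on a standalone worker
  // (e.g. SPARK_WORKER_CORES set above the physical core count in
  // spark-env.sh) and cap each application, so that concurrently running
  // executor JVMs are time-sliced by the host kernel's CPU scheduler rather
  // than by Spark itself.
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("low-priority-batch-job")
    .setMaster("spark://master-host:7077") // master URL is illustrative
    .set("spark.executor.cores", "2")      // cores requested per executor
    .set("spark.cores.max", "8")           // cap for this application

  val sc = new SparkContext(conf)

If that is roughly the intent, it would only apply where Spark itself
launches the executors, which is why I'm asking about YARN and Mesos.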

On Sat, Dec 3, 2016 at 4:27 AM, Hegner, Travis <theg...@trilliumit.com>
wrote:

> Hello,
>
>
> I've just created a JIRA to open up discussion of a new feature that I'd
> like to propose.
>
>
> https://issues.apache.org/jira/browse/SPARK-18689
>
>
> I'd love to get some feedback on the idea. I know that anything related to
> scheduling or queuing normally throws up the "hard to implement" red flags,
> but the proposal contains a rather simple way to implement the concept: it
> delegates the scheduling logic to the kernel of each worker, rather than
> putting it in any Spark core code. I believe this approach is more flexible
> and simpler to set up and maintain than dynamic allocation, and it avoids
> the need for any preemption-type logic.
>
>
> The proposal does not contain any code. I am not (yet) familiar enough
> with the Spark core code to confidently create an implementation.
>
>
> I appreciate your time and am looking forward to your feedback!
>
>
> Thanks,
>
>
> Travis
>
