Hello,
I've just created a JIRA to open up discussion of a new feature that I'd like
to propose.
https://issues.apache.org/jira/browse/SPARK-18689
I'd love to get some feedback on the idea. I know that normally anything
related to scheduling or queuing automatically throws up the "hard to …"
From: Shuai Lin
Sent: Saturday, December 3, 2016 06:52
To: Hegner, Travis
Cc: dev@spark.apache.org
Subject: Re: SPARK-18689: A proposal for priority based app scheduling
utilizing linux cgroups.
Sorry but I don't get the scope of the problem from your description. Seems
it's …

From: Hegner, Travis

… In short, *I
am not trying to write a scheduler*: I am trying to slightly (and optionally)
tweak the way executors are allocated and launched, so that I can utilize my
small Spark cluster more intuitively and more efficiently.
Thanks,
Travis
From: Steve Loughran
…

From: Hegner, Travis

… I'm trying to
solve currently.
Sorry that the patch is pretty rough still, as I'm still getting my head
wrapped around spark's code base structure. Looking forward to any feedback.
Thanks,
Travis
From: Hegner, Travis
Sent: Tuesday, December 6
… The goal with this patch is to essentially eliminate the static allocation
of cpu cores altogether: give each app cpu time proportional to its number of
shares as a fraction of the total pool.
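As a rough sketch of what I mean (the path below assumes the standard
cgroup-v1 cpu hierarchy, and the object and method names are purely
illustrative, not anything in the patch itself):

    import java.nio.charset.StandardCharsets
    import java.nio.file.{Files, Paths}

    // Illustrative only: place each app in its own cgroup and set cpu.shares;
    // when the machine is saturated, the kernel divides cpu time in proportion
    // to shares. Requires the cpu controller mounted at the usual cgroup-v1
    // path and permission to create groups under it.
    object CpuSharesSketch {
      private val CpuRoot = Paths.get("/sys/fs/cgroup/cpu")

      def setShares(appId: String, shares: Int): Unit = {
        val group = CpuRoot.resolve(s"spark-$appId")
        Files.createDirectories(group)              // mkdir creates the cgroup
        Files.write(group.resolve("cpu.shares"),    // control file exists after mkdir
          shares.toString.getBytes(StandardCharsets.UTF_8))
      }

      // When every app is busy, an app's cpu fraction is simply its shares
      // divided by the sum of all apps' shares; idle apps reserve nothing.
      def cpuFraction(appShares: Int, allShares: Seq[Int]): Double =
        appShares.toDouble / allShares.sum

      def main(args: Array[String]): Unit = {
        // e.g. one high-priority app (2048 shares) next to two defaults (1024 each):
        println(cpuFraction(2048, Seq(2048, 1024, 1024)))  // prints 0.5
      }
    }

A "priority" is then nothing more than a larger shares value, and an app that
isn't using its cpu time doesn't hold anything back from the others.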
Thanks,
Travis
From: Jörn Franke
Sent: Thursday, December 15, 2016 12:…
…

From: Hegner, Travis

…

Thanks,
Travis
From: Reynold Xin
Sent: Thursday, December 15, 2016 14:07
To: Hegner, Travis
Cc: Jörn Franke; Apache Spark Dev
Subject: Re: SPARK-18689: A proposal for priority based app scheduling
utilizing linux cgroups.
In general this falls directly into …