What was the JIRA that Alessandro opened?
Thanks
> On Mar 16, 2015, at 10:34 PM, Mark Hamstra wrote:
>
> http://apache-spark-developers-list.1001551.n3.nabble.com/Job-priority-td10076.html#a10079
>
>> On Mon, Mar 16, 2015 at 10:26 PM, abhi wrote:
>> If I understand correctly, the above do
In that case, having pre-configured pools, but selecting the correct pool at
the code level, might do.
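Something like this, assuming the fair scheduler is enabled and pools named
"high" and "low" exist in fairscheduler.xml (the pool names, app name, and
file path below are only examples):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("pool-demo")                              // example app name
  .set("spark.scheduler.mode", "FAIR")                  // turn on the fair scheduler
  .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
val sc = new SparkContext(conf)

// Route all jobs submitted from this thread into the "high" pool.
sc.setLocalProperty("spark.scheduler.pool", "high")
sc.parallelize(1 to 1000).count()                       // runs in the "high" pool

// Switch the same thread over to the "low" pool for the next job.
sc.setLocalProperty("spark.scheduler.pool", "low")
sc.parallelize(1 to 1000).count()                       // runs in the "low" pool

// Clear the property so later jobs fall back to the default pool.
sc.setLocalProperty("spark.scheduler.pool", null)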
On Tue, Mar 17, 2015 at 11:23 AM, abhi wrote:
> Yes.
> Each generated job can have a different priority; it is like a recursive
> function, where in each iteration the generated job will be submitted to the
Yes.
Each generated job can have a different priority; it is like a recursive
function, where in each iteration the generated job will be submitted to the
Spark cluster based on its priority. Jobs with lower priority, or below some
threshold, will be discarded.
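Roughly, the driver-side loop I have in mind would look like the sketch below
(GeneratedJob, the threshold, and the priority-to-pool mapping are all made up
for illustration; it assumes a SparkContext sc with fair pools configured, as
in the earlier sketch):

import scala.collection.mutable

// Hypothetical shape for a generated job: a priority plus the action to run.
case class GeneratedJob(priority: Int, run: () => Unit)

val threshold = 5                                       // illustrative cutoff
val pending = mutable.PriorityQueue.empty[GeneratedJob](
  Ordering.by[GeneratedJob, Int](_.priority))           // highest priority first

// Seed with the first job; a running job may enqueue the jobs it generates.
pending.enqueue(GeneratedJob(10, () => {
  sc.parallelize(1 to 100).count()                      // stand-in Spark action
}))

while (pending.nonEmpty) {
  val job = pending.dequeue()
  if (job.priority >= threshold) {
    // Map the numeric priority onto one of the pre-configured pools.
    val pool = if (job.priority >= 8) "high" else "low"
    sc.setLocalProperty("spark.scheduler.pool", pool)
    job.run()
  }
  // else: below the threshold, the job is discarded, as described above.
}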
Thanks,
Abhi
Hi Abhi,
Do you mean that each task of a job can have a different priority, or that
jobs generated by one job can have different priorities?
On Tue, Mar 17, 2015 at 11:04 AM, Mark Hamstra wrote:
>
> http://apache-spark-developers-list.1001551.n3.nabble.com/Job-priority-td10076.html#a10079
>
http://apache-spark-developers-list.1001551.n3.nabble.com/Job-priority-td10076.html#a10079
On Mon, Mar 16, 2015 at 10:26 PM, abhi wrote:
> If I understand correctly, the above document creates pools for priority
> which are static in nature and have to be defined before submitting the job.
> In
If I understand correctly, the above document creates pools for priority
which are static in nature and have to be defined before submitting the job.
In my scenario each generated task can have a different priority.
Thanks,
Abhi
On Mon, Mar 16, 2015 at 9:48 PM, twinkle sachdeva <twinkle.sachd...@
Hi,
Maybe this is what you are looking for:
http://spark.apache.org/docs/1.2.0/job-scheduling.html#fair-scheduler-pools
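From that page, pools are declared in an allocation XML file pointed to by
spark.scheduler.allocation.file; the pool names, weights, and minimum shares
here are only examples:

<?xml version="1.0"?>
<allocations>
  <pool name="high">
    <schedulingMode>FAIR</schedulingMode>
    <weight>3</weight>
    <minShare>2</minShare>
  </pool>
  <pool name="low">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>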
Thanks,
On Mon, Mar 16, 2015 at 8:15 PM, abhi wrote:
> Hi
> Currently all the jobs in Spark get submitted using a queue. I have a
> requirement where a submitted job will genera
Hi
Currently all the jobs in Spark get submitted using a queue. I have a
requirement where a submitted job will generate another set of jobs, each
with some priority, which should again be submitted to the Spark cluster
based on priority. That is, a job with higher priority should be executed
first. Is it feasible?