To put this on the devs' radar, I suggest creating a JIRA for it (and
checking first if one already exists).

issues.apache.org/jira/

Nick

On Tue, May 19, 2015 at 1:34 PM Matei Zaharia <matei.zaha...@gmail.com>
wrote:

> Yeah, this definitely seems useful there. There might also be some ways to
> cap the application in Mesos, but I'm not sure.
>
> Matei
>
> On May 19, 2015, at 1:11 PM, Thomas Dudziak <tom...@gmail.com> wrote:
>
> I'm using fine-grained for a multi-tenant environment which is why I would
> welcome the limit of tasks per job :)
>
> cheers,
> Tom
>
> On Tue, May 19, 2015 at 10:05 AM, Matei Zaharia <matei.zaha...@gmail.com>
> wrote:
>
>> Hey Tom,
>>
>> Are you using the fine-grained or coarse-grained scheduler? For the
>> coarse-grained scheduler, there is a spark.cores.max config setting that
>> will limit the total # of cores it grabs. This was there in earlier
>> versions too.
>>
>> Matei
>>
>> > On May 19, 2015, at 12:39 PM, Thomas Dudziak <tom...@gmail.com> wrote:
>> >
>> > I read the other day that there will be a fair number of improvements
>> > in 1.4 for Mesos. Could I ask for one more (if it isn't already in there):
>> > a configurable limit for the number of tasks for jobs run on Mesos? This
>> > would be a very simple yet effective way to prevent a job from dominating
>> > the cluster.
>> >
>> > cheers,
>> > Tom
>> >
>>
>>
>
>
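For reference, the `spark.cores.max` setting Matei mentions can be passed per application, e.g. via spark-submit. A minimal sketch, assuming the coarse-grained Mesos scheduler; the master URL, class name, and jar are placeholders:

```shell
# Cap the total number of cores this application may acquire
# across the cluster (coarse-grained Mesos mode).
spark-submit \
  --master mesos://host:5050 \
  --conf spark.cores.max=16 \
  --class com.example.MyJob \
  my-job.jar
```

The same property can also be set in `spark-defaults.conf` or on a `SparkConf` in code. Note this caps cores, not tasks, which is why a per-job task limit in fine-grained mode would still be a separate feature request.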
