Thanks for your reply, Till.
We will use Flink without Marathon, and hope the PR is merged into the latest version soon.
 
Best regards,
Bo

> On Oct 9, 2017, at 6:36 PM, Till Rohrmann <trohrm...@apache.org> wrote:
> 
> Hi Bo,
> 
> Flink internally uses Fenzo to match tasks and offers. Fenzo does not support 
> the Marathon constraints syntax you are referring to. At the moment, Flink 
> only allows you to define hard host attribute constraints, which means that 
> you define a host attribute that has to match exactly. Fenzo also supports 
> constraints that work on a set of tasks [1], but this is not yet exposed to 
> the user. With that, you should be able to spread your tasks evenly across 
> multiple machines.
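
For reference, a minimal sketch of what such a group constraint looks like with
Fenzo's Java plugins, assuming the UniqueHostAttrConstraint class and constructor
shape described in [1] (the task IDs below are made up for illustration, and this
is not reachable through Flink's configuration today):

    import com.netflix.fenzo.ConstraintEvaluator;
    import com.netflix.fenzo.functions.Func1;
    import com.netflix.fenzo.plugins.UniqueHostAttrConstraint;

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class SpreadConstraintSketch {

        // The group of task IDs that should be spread relative to each other,
        // e.g. the task manager tasks of one Flink cluster (names made up).
        private static final Set<String> CO_TASKS =
                new HashSet<>(Arrays.asList("taskmanager-00001", "taskmanager-00002"));

        public static ConstraintEvaluator spreadAcrossRacks() {
            // coTasksGetter: for a given task ID, return the IDs of the tasks
            // that belong to the same group.
            Func1<String, Set<String>> coTasksGetter = taskId -> CO_TASKS;

            // Each task of the group must land on a host with a distinct value
            // of the "rack" attribute; the single-argument constructor falls
            // back to the host name instead.
            return new UniqueHostAttrConstraint(coTasksGetter, "rack");
        }
    }

If an even balance over a fixed number of attribute values is wanted rather than
strict uniqueness, the same wiki page also lists a BalancedHostAttrConstraint.
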
> 
> There is actually a PR [2] that tries to add this functionality. However, it 
> is not yet in shape to be merged.
> 
> [1] https://github.com/Netflix/Fenzo/wiki/Constraints#constraints-that-operate-on-groups-of-tasks
> [2] https://github.com/apache/flink/pull/4628
> 
> Cheers,
> Till
> 
> On Fri, Oct 6, 2017 at 10:54 AM, Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote:
> Hi Bo,
> 
> I'm not familiar with Mesos deployments, but I'll forward this to Till or 
> Eron (in CC) who perhaps could provide some help here.
> 
> Cheers,
> Gordon
> 
> 
> On 2 October 2017 at 8:49:32 PM, Bo Yu (yubo1...@gmail.com) wrote:
> 
>> Hello all,
>> This is Bo. I ran into some problems when I tried to use Flink in my Mesos 
>> cluster (1 master, 2 slaves, each with a 32-core CPU).
>> I started mesos-appmaster.sh in Marathon, and the job manager started 
>> without problems.
>> 
>> mesos-appmaster.sh -Djobmanager.heap.mb=1024 -Dtaskmanager.heap.mb=1024 
>> -Dtaskmanager.numberOfTaskSlots=32
>> 
>> My problem is that the task managers are all placed on one single slave.
>> 1. (log1)
>> The initial tasks setting in "/usr/local/flink/conf/flink-conf.yaml" is set to 
>> "mesos.initial-tasks: 2".
>> I also set "mesos.constraints.hard.hostattribute: rack:ak09-27", where ak09-27 
>> is the master node of the Mesos cluster.
>> 
>> 2. (log2)
>> I tried many ways to distribute the tasks across all the available slaves, 
>> without any success.
>> So I decided to try adding a GROUP_BY operator, which I took from 
>> https://mesosphere.github.io/marathon/docs/constraints.html
>> "mesos.constraints.hard.hostattribute: rack:ak09-27,GROUP_BY:2"
>> According to the log, Flink keeps waiting for more offers and the tasks are 
>> never launched.
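
If each comma-separated entry in that value is treated as an attribute:value pair
that has to match exactly (the hard-constraint behaviour described in Till's reply
above), then GROUP_BY ends up being looked up as a host attribute named "GROUP_BY"
with value "2". No agent advertises such an attribute, so no offer can ever satisfy
the constraint, which would explain the endless waiting for offers. A form that can
match only restricts which hosts are eligible, for example (the rack attribute and
its value are purely an illustration):

    mesos.constraints.hard.hostattribute: rack:rack-1

with the agents started along the lines of

    mesos-agent ... --attributes="rack:rack-1"

A hard constraint like this cannot spread task managers evenly; that is what the
Fenzo group constraints and the PR mentioned in Till's reply are about.
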
>> 
>> Sorry, I am a newbie to Flink and also to Mesos. Please reply if my problem is 
>> not clear, and I would appreciate any hint about how to distribute tasks 
>> evenly across the available resources.
>> 
>> Thank you in advance.
>> 
>> Best regards,
>> 
>> Bo
>> 
> 
