Hi all,

In YARN mode, a node may contain more than one container. Is there a scheme for assigning tasks to different nodes?
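One approach that might work here, assuming the YARN options of the 1.4.x flink CLI (-yn for the number of containers, -ys for slots per TaskManager, -ytm for TaskManager memory), is to start a per-job YARN cluster with a single slot per container, so that the parallel tasks at least end up in separate containers. A minimal sketch, with placeholder values and a hypothetical job jar:

    # one slot per container; 6 containers for a parallelism-6 job
    # -yn = number of YARN containers, -ys = slots per TaskManager,
    # -ytm = TaskManager memory in MB, -p = job parallelism
    ./bin/flink run -m yarn-cluster -yn 6 -ys 1 -ytm 2048 -p 6 ./my-job.jar

Note that YARN still decides which physical nodes the containers are placed on, so this spreads tasks across containers but does not by itself guarantee one container per node.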
The version is 1.4.2. Thanks for your assistance.

Sayat Satybaldiyev <saya...@gmail.com> wrote on Fri, Oct 26, 2018 at 3:50 PM:

> Thanks for the advice, Kien. Could you please share more details on why it's
> best to allocate a separate cluster for each job?
>
> On Wed, Oct 24, 2018 at 3:23 PM Kien Truong <duckientru...@gmail.com> wrote:
>
>> Hi,
>>
>> You can have multiple Flink clusters on the same set of physical
>> machines. In our experience, it's best to deploy a separate Flink
>> cluster for each job and adjust the resources accordingly.
>>
>> Best regards,
>> Kien
>>
>> On Oct 24, 2018 at 20:17, Sayat Satybaldiyev <saya...@gmail.com> wrote:
>>
>> The Flink cluster is standalone with an HA configuration. It has 6 task
>> managers, each with 8 slots, so 48 slots overall.
>>
>>> If your cluster has only one task manager with one slot on each node,
>>> then the job should be spread evenly.
>>
>> Agreed, that would solve the issue. However, the cluster is running other
>> jobs, and in that case it would not have hardware resources left for them.
>>
>> On Wed, Oct 24, 2018 at 2:20 PM Kien Truong <duckientru...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> How are your task managers deployed?
>>>
>>> If your cluster has only one task manager with one slot on each node,
>>> then the job should be spread evenly.
>>>
>>> Regards,
>>>
>>> Kien
>>>
>>> On 10/24/2018 4:35 PM, Sayat Satybaldiyev wrote:
>>> > Is there any way to tell Flink not to allocate all parallel tasks
>>> > on one node? We have a stateless Flink job that reads from a
>>> > 10-partition topic and has a parallelism of 6. The Flink job manager
>>> > allocates all 6 parallel operators to one machine, so all the Kafka
>>> > traffic goes to a single machine. We have a cluster of 6 nodes, and
>>> > ideally each machine would run one parallel operator. Is there a way
>>> > to do that in Flink?
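For reference, the standalone layout Kien describes above (one TaskManager with a single slot on each node, one cluster per job) would roughly correspond to a flink-conf.yaml like the following on every node; the memory and parallelism values here are only placeholders:

    # one slot per TaskManager, so each parallel task lands on a different node
    taskmanager.numberOfTaskSlots: 1
    # sized for the single job this dedicated cluster runs
    taskmanager.heap.mb: 2048
    parallelism.default: 6

The trade-off raised in the thread is that machines dedicated this way leave no slots for other jobs, which is why the suggestion is a separate cluster per job with resources adjusted accordingly.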