Hi Sayat,
at the moment it is not possible to control Flink's scheduling behaviour. In
the future, we plan to add some kind of hint that controls whether the tasks
of a job get spread out across nodes or packed onto as few nodes as possible.
Cheers,
Till
On Fri, Oct 26, 2018 at 2:06 PM Kien Truong wrote:
Hi,
There are a couple of reasons:
- Easier resource allocation and isolation: one faulty job doesn't affect
another.
- Mix and match of Flink versions: you can let the old, stable jobs keep
running on the old Flink version and use the latest version of Flink for
new jobs.
- Faster metrics collection.
Hi all,
When running on YARN, a node may contain more than one container. Is there a
scheme for assigning tasks to different nodes?
The version is 1.4.2.
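For reference, a YARN session on 1.4 is typically started along these lines
(the counts and memory figures below are just examples, not our actual
settings):

    ./bin/yarn-session.sh -n 6 -s 4 -jm 1024 -tm 4096

Here -n requests the number of YARN containers (one task manager per
container) and -s the slots per task manager; YARN decides which nodes those
containers land on, so several of them may end up on the same node.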
Thanks for your assistance.
On Fri, Oct 26, 2018 at 3:50 PM, Sayat Satybaldiyev wrote:
> Thanks for the advice, Kien. Could you please share more details on why
> it's best to allocate a separate cluster for each job?
Thanks for the advice, Kien. Could you please share more details on why it's
best to allocate a separate cluster for each job?
On Wed, Oct 24, 2018 at 3:23 PM Kien Truong wrote:
> Hi,
>
> You can have multiple Flink clusters on the same set of physical
> machines. In our experience, it's best to deploy a separate Flink cluster
> for each job and adjust the resources accordingly.
Hi,
You can have multiple Flink clusters on the same set of physical machines. In
our experience, it's best to deploy a separate Flink cluster for each job
and adjust the resources accordingly.
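For illustration, running two co-located standalone clusters roughly looks
like this; the directory names are just examples and the exact config keys
vary a little between Flink versions, so treat it as an outline rather than
a recipe:

    # /opt/flink/conf-jobA/flink-conf.yaml
    jobmanager.rpc.port: 6123
    rest.port: 8081
    taskmanager.numberOfTaskSlots: 2

    # /opt/flink/conf-jobB/flink-conf.yaml
    jobmanager.rpc.port: 6124
    rest.port: 8082
    taskmanager.numberOfTaskSlots: 2

    # start each cluster with its own configuration directory
    FLINK_CONF_DIR=/opt/flink/conf-jobA ./bin/start-cluster.sh
    FLINK_CONF_DIR=/opt/flink/conf-jobB ./bin/start-cluster.sh

Each job is then submitted to its own cluster, and you size the task
managers of that cluster for that job alone.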
Best regards,
Kien
On Oct 24, 2018 at 20:17, Sayat Satybaldiyev wrote:
> The Flink cluster is standalone with an HA configuration. It has 6 task
> managers, each with 8 slots.
The Flink cluster is standalone with an HA configuration. It has 6 task
managers, each with 8 slots: 48 slots overall for the cluster.
>> If your cluster has only one task manager with one slot on each node,
>> then the job should be spread evenly.
Agreed, this would solve the issue. However, the cluster is
Hi,
How are your task managers deployed?
If your cluster has only one task manager with one slot on each node, then
the job should be spread evenly.
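In flink-conf.yaml terms, that is roughly (one task manager process per
node, started with this setting):

    taskmanager.numberOfTaskSlots: 1

With 6 nodes and one single-slot task manager on each of them, a job with
parallelism 6 has to use all six nodes.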
Regards,
Kien
On 10/24/2018 4:35 PM, Sayat Satybaldiyev wrote:
> Is there any way to indicate Flink not to allocate all parallel tasks
> on one node?
Is there any way to indicate Flink not to allocate all parallel tasks on
one node? We have a stateless Flink job reading from a 10-partition topic
with a parallelism of 6. The Flink job manager allocates all 6 parallel
operators to one machine, causing all the traffic from Kafka to be handled
by only one node.
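For reference, the job is essentially of the following shape; the topic
name, Kafka settings and the map step are placeholders rather than the real
code, and the 0.11 Kafka connector is just an assumption:

    import java.util.Properties;

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class StatelessKafkaJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // 6 parallel subtasks; which task manager slots they land in is
            // decided by the scheduler, not by anything in the job code.
            env.setParallelism(6);

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
            props.setProperty("group.id", "stateless-job");       // placeholder

            env.addSource(new FlinkKafkaConsumer011<>(
                        "events", new SimpleStringSchema(), props))
               .map(new MapFunction<String, String>() {
                   @Override
                   public String map(String value) {
                       return value; // stands in for the real stateless processing
                   }
               })
               .print();

            env.execute("stateless-kafka-job");
        }
    }

Nothing in the job itself pins operators to particular machines, so any fix
has to come from the cluster layout or the scheduler.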