Hi everybody. We're currently experiencing the same behavior on
flink-1.6.2.

I've read that Flink treats all slots as equal; it doesn't even know
which machine a slot resides on:
https://stackoverflow.com/questions/54980104/uneven-assignment-of-tasks-to-workers-in-flink

If so, slot locality should not be an issue, and the fact that Flink
fills all the slots of one machine before moving on to the next should
be nothing more than a coincidence of the scheduler.
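
If it helps to verify what the scheduler is actually doing, the
JobManager REST API reports slot usage per TaskManager. A quick check
(assuming the default REST port 8081; <jobmanager-host> is a
placeholder for your own master node):

  # List all TaskManagers with their total and free slot counts
  curl http://<jobmanager-host>:8081/taskmanagers

Each entry in the response includes "slotsNumber" and "freeSlots", so
after submitting a job you can see exactly which machine absorbed the
tasks.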

That said, I'm fairly sure I never observed this behavior on previous
major versions (flink-1.3 for sure).
Moreover, it is harmful, because the machine that receives all the
tasks can exhaust its resources (e.g. memory, disk).
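
In the meantime, a coarse workaround might be to reduce the number of
slots each TaskManager advertises, so a single job cannot land entirely
on one machine. A sketch against flink-conf.yaml (the value 1 is just
an example and trades slot density for distribution):

  # flink-conf.yaml
  # Fewer slots per TaskManager force the scheduler to spill tasks
  # over to other machines sooner.
  taskmanager.numberOfTaskSlots: 1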

I hope we can find a solution to this.
Sincerely,

Andrea


On Mon, 18 Mar 2019 at 11:53, Kumar Bolar, Harshith <
hk...@arity.com> wrote:

> Hi all,
>
> We're running Flink on a five-node standalone cluster with three task
> managers (TM1, TM2, TM3) and two job managers.
>
> Whenever I submit a new job, it gets deployed only on TM3. When the
> slots on TM3 are exhausted, jobs start getting deployed on TM2, and so
> on. How do I ensure that jobs are distributed evenly across all three
> task managers?
>
> Thanks,
>
> Harshith


-- 
*Andrea Spina*
Software Engineer @ Radicalbit Srl
Via Borsieri 41, 20159, Milano - IT
