> (c) You have two operators with the same name that become tasks with the
same name.
Actually it was a variation on that issue.
The problem was that I was reading a dataset X which was part of both the
dynamic and the static path of a Flink iteration. I guess this duplicates
these paths.
It could be that
(a) The task failed and was restarted.
(b) The program has multiple steps (collect() print()), so that parts of
the graph get re-executed.
(c) You have two operators with the same name that become tasks with the
same name.
Do any of those explanations make sense in your setting?
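For case (b), the effect is easy to model outside of Flink: with lazy evaluation, every sink action re-executes its upstream graph, so the same task names show up again under fresh execution attempts. Below is a toy Python model (not Flink code; all names are made up for illustration) of that behavior:

```python
import itertools

# Toy model of explanation (b): each sink action re-runs the whole
# upstream plan, so task names repeat while execution IDs stay unique.
_execution_ids = itertools.count()

class Operator:
    def __init__(self, name, parent=None, fn=None):
        self.name, self.parent, self.fn = name, parent, fn

    def map(self, name, fn):
        # Lazily chain a new operator; nothing runs yet.
        return Operator(name, parent=self, fn=fn)

    def _run(self, log):
        # Re-executes the parent chain on every call; each run of an
        # operator is a fresh attempt with a unique execution ID.
        data = self.parent._run(log) if self.parent else list(range(5))
        log.append((self.name, next(_execution_ids)))
        return [self.fn(x) for x in data] if self.fn else data

    def collect(self, log):
        # A sink action: triggers execution of the full upstream graph.
        return self._run(log)

log = []
src = Operator("source")
doubled = src.map("double", lambda x: 2 * x)
doubled.collect(log)   # first action: runs source, then double
doubled.collect(log)   # second action: re-runs the same two tasks
# log now holds "source" and "double" twice each, with 4 distinct IDs
```

The same reasoning explains why seeing a task name twice is harmless on its own; only the execution attempt IDs distinguish the runs.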
On Tue, May 31, 2016 at 11:53 AM, Alexander Alexandrov wrote:
> Can somebody shed a light on the execution semantics of the scheduler which
> will explain this behavior?
The execution IDs are unique per execution attempt. Having two tasks
with the same subtask index running at the same time is unexpected.