Hi Rami,
Could you maybe provide your code? You could also send it to me directly if
you don't want to share it with the community.
It might be that there is something in the way the pipeline is set up that
causes the (generated) operator UIDs to not be deterministic.
Best,
Aljoscha
On Sat, 7 Jan 20
Hi Stephan,
I have not changed the parallelism, the operator names, or anything else in my
program. It is the exact same jar file, unmodified.
I have tried .uid(...), but I faced this exception: "UnsupportedOperationException: Cannot
assign user-specified hash to intermediate node in chain. This will be
supported in future
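(Editor's note for readers hitting the same exception: this error typically appears when .uid(...) is called on an operator that Flink has chained into a preceding one, so it is not a chain head. A possible workaround, sketched here under the assumption of the Java DataStream API with a hypothetical MyMapper function, is to break the chain at that operator so it can carry its own ID:)

```java
// Sketch: make the operator a chain head so it can accept a user-specified uid.
// startNewChain() begins a new chain at this operator; disableChaining()
// would exclude it from chaining entirely.
stream
    .map(new MyMapper())     // MyMapper is a hypothetical MapFunction
    .startNewChain()         // alternatively: .disableChaining()
    .uid("my-mapper");       // stable ID, now assignable to a chain head
```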
Hi!
Did you change the parallelism in your program, or do the names of some
functions change each time you call the program?
Can you try what happens when you give explicit IDs to operators via the
'.uid(...)' method?
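(Editor's note: assigning explicit IDs might look like the following sketch, assuming the Java DataStream API; the source, mapper, and sink classes and the ID strings are hypothetical. Without explicit uids, Flink derives operator IDs from the job graph structure, which is why graph changes can prevent a savepoint from being mapped back to its operators.)

```java
// Sketch: give every stateful operator a stable, explicit uid so that
// state in a savepoint can be matched to operators across restarts.
DataStream<Event> events = env
    .addSource(new MyKafkaSource())
    .uid("kafka-source");            // stable ID for the source's state

events
    .keyBy(e -> e.getKey())
    .map(new MyStatefulMapper())
    .uid("stateful-mapper")          // stable ID for the mapper's state
    .addSink(new MySink())
    .uid("sink");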
Stephan
On Tue, Jan 3, 2017 at 11:44 PM, Al-Isawi Rami
wrote:
Hi,
I have a Flink job for which I can trigger a savepoint with no problem.
However, if I cancel the job and then try to run it from the savepoint, I get the
following exception. Any ideas how I can debug or fix it? I am using the exact
same jar, so I did not modify the program in any manner. Using