Something strange happened today.
When we tried to shut down a job with a savepoint, the watermarks became
equal to 2^63 - 1 (Long.MAX_VALUE).

This caused timers to fire nonstop and crash downstream systems by
flooding them with spurious data.
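
For context, the timers live in keyed process functions roughly like the
sketch below (Event and Alert are stand-ins for our real types, and the
one-minute delay is just an example):

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Simplified sketch; Event and Alert are placeholders for our real classes.
public class DelayAlertFunction extends KeyedProcessFunction<String, Event, Alert> {

    @Override
    public void processElement(Event event, Context ctx, Collector<Alert> out)
            throws Exception {
        // Register an event-time timer one minute past the event's timestamp.
        ctx.timerService().registerEventTimeTimer(event.timestamp + 60_000L);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Alert> out)
            throws Exception {
        // If the watermark jumps to Long.MAX_VALUE (2^63 - 1), every pending
        // event-time timer becomes due and fires at once.
        out.collect(new Alert(ctx.getCurrentKey(), timestamp));
    }
}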

We are using event time processing with Kafka as our source.
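
Our source is essentially the Kafka connector with a bounded-out-of-orderness
watermark strategy; a simplified version is below (broker, topic, group id and
the five-second bound are placeholders for our real values, and env is the
usual StreamExecutionEnvironment):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("kafka:9092")                 // placeholder broker
        .setTopics("events")                               // placeholder topic
        .setGroupId("my-consumer-group")                   // placeholder group
        .setStartingOffsets(OffsetsInitializer.committedOffsets())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

DataStream<String> stream = env.fromSource(
        source,
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)),
        "kafka-source");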

It seems impossible for a watermark to be that large.

I know a watermark like that can occur when a pipeline runs in batch
execution mode, but this job was running in streaming mode.
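
For reference, batch execution would have to be enabled explicitly,
something like the line below, and we have nothing of the sort in our code:

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Batch execution emits a final Long.MAX_VALUE watermark at the end of the
// bounded input; our job never sets this and stays in the default STREAMING mode.
env.setRuntimeMode(RuntimeExecutionMode.BATCH);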

What can cause this?  Is this normal behavior when creating a savepoint?
