Do you mean a failed checkpoint, or do you mean that it happens after a
restore from a checkpoint? If it is the latter, then this is kind of
expected: watermarks are not checkpointed, so after a restore they need
to be repopulated from the incoming records.
Best,
Dawid
On 19/07/2021 07:41, Dan Hill wrote:
> After my dev flink jo
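As a concrete illustration of where watermarks come from, here is a minimal
sketch of a typical DataStream watermark setup (the event type, durations,
and job name are placeholders, not anything from this thread). Watermarks
are derived on the fly from records passing through the assigner, which is
why they only reappear after a restore once new data flows in:

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WatermarkSetupSketch {

    // Hypothetical event type with an event-time field in epoch millis.
    public static class MyEvent {
        public long eventTimeMillis;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<MyEvent> events = env.fromElements(new MyEvent()); // placeholder source

        // Watermarks are generated from the timestamps of records flowing through
        // this strategy. They are not part of checkpoint state, so after a restore
        // downstream operators show "no watermark" until enough new records arrive.
        DataStream<MyEvent> withTimestamps = events.assignTimestampsAndWatermarks(
                WatermarkStrategy
                        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                        .withTimestampAssigner((event, recordTimestamp) -> event.eventTimeMillis)
                        // Optional: mark partitions with no traffic as idle so they
                        // do not hold back the watermark indefinitely.
                        .withIdleness(Duration.ofMinutes(1)));

        withTimestamps.print();
        env.execute("watermark-sketch");
    }
}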
Hi!
This does not sound like expected behavior. Could you share your code /
SQL and Flink configuration so that others can help diagnose the issue?
Dan Hill wrote on Monday, July 19, 2021 at 1:41 PM:
> After my dev flink job hits a checkpoint failure (e.g. timeout) and then
> has successful checkpoints, the fl
After my dev Flink job hits a checkpoint failure (e.g. timeout) and then
has successful checkpoints, the Flink job appears to be in a bad state.
E.g. some of the operators that previously had a watermark start showing
"no watermark". The job proceeds very slowly.
Is there documentation for this s
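Not an answer to the question, but since the failure mode mentioned is a
checkpoint timeout, here is a minimal sketch of where the checkpoint timeout
and failure tolerance are configured in the DataStream API (all values and
the tiny pipeline are placeholders):

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60 seconds (placeholder interval).
        env.enableCheckpointing(60_000L);

        CheckpointConfig checkpointConfig = env.getCheckpointConfig();

        // Give slow checkpoints more time before they are declared failed (placeholder value).
        checkpointConfig.setCheckpointTimeout(10 * 60_000L);

        // Tolerate a few failed checkpoints before the whole job is failed and restarted.
        checkpointConfig.setTolerableCheckpointFailureNumber(3);

        // Placeholder pipeline so the sketch runs end to end.
        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-config-sketch");
    }
}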
Hi Dario,
out of curiosity, could you briefly describe the driving use case? What
is the (logical) constraint that drives the requirement? I'd guess that
it could be related to waiting for some (external) condition? Or maybe
related to late data? I think that there might be better approaches.
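For what it's worth, one approach that is often suggested for "hold each
record back for N minutes" requirements is a timer-based KeyedProcessFunction.
The sketch below is only one possible reading of the use case (delaying
emission by roughly 15 minutes of processing time); the class name and the
String types are hypothetical:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

/**
 * Holds each record back for roughly DELAY_MS of processing time before
 * emitting it. Pending records live in checkpointed keyed state, grouped by
 * the timer timestamp that will release them, so the delay also survives a
 * restore from a checkpoint.
 */
public class DelayedEmitFunction extends KeyedProcessFunction<String, String, String> {

    private static final long DELAY_MS = 15 * 60 * 1000L; // 15 minutes

    private transient MapState<Long, List<String>> pending;

    @Override
    public void open(Configuration parameters) {
        pending = getRuntimeContext().getMapState(new MapStateDescriptor<>(
                "pending-records", Types.LONG, Types.LIST(Types.STRING)));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        long emitAt = ctx.timerService().currentProcessingTime() + DELAY_MS;
        List<String> batch = pending.get(emitAt);
        if (batch == null) {
            batch = new ArrayList<>();
        }
        batch.add(value);
        pending.put(emitAt, batch);
        // Timers are deduplicated per timestamp, so registering twice is harmless.
        ctx.timerService().registerProcessingTimeTimer(emitAt);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        List<String> batch = pending.get(timestamp);
        if (batch != null) {
            for (String value : batch) {
                out.collect(value);
            }
            pending.remove(timestamp);
        }
    }
}

Assuming this matches the use case, it would be applied as
stream.keyBy(...).process(new DelayedEmitFunction()) in front of the sink.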
Hey Kiran,
Yeah, I was thinking of another solution, so I have one PostgreSQL sink &
one Kafka sink.
So I can just process the data in real time and insert it into the DB.
Then I would just select the latest row where created_at >= NOW() -
interval '15 minutes', and for any Kafka consumer I would
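A minimal JDBC sketch of the kind of query described above (the table name,
column, connection URL, and credentials are hypothetical placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RecentRowsPoller {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; assumes the PostgreSQL JDBC driver is on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            // Select the latest row written within the last 15 minutes, as described above.
            String sql = "SELECT * FROM events "
                       + "WHERE created_at >= NOW() - INTERVAL '15 minutes' "
                       + "ORDER BY created_at DESC LIMIT 1";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp("created_at"));
                }
            }
        }
    }
}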
Hi Dario,
Did you explore other options? If your use case (apart from delaying sink
writes) can be solved via Spark Streaming, then maybe Spark Streaming with
a micro-batch interval of 15 minutes would help.
On Sat, Jul 17, 2021 at 10:17 PM Dario Heinisch wrote:
> Hey there,
>
> Hope all is well!
>
> I w
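To illustrate the micro-batch suggestion above, here is a minimal Spark
Structured Streaming sketch (Java API) with a 15-minute processing-time
trigger. The Kafka topic, bootstrap servers, output format, and paths are
placeholders, not details from this thread:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.Trigger;

public class FifteenMinuteMicroBatch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("fifteen-minute-micro-batch")
                .getOrCreate();

        // Placeholder source: a Kafka topic.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "events")
                .load();

        // Each micro-batch runs roughly every 15 minutes, so writes are delayed
        // by up to the trigger interval.
        StreamingQuery query = events
                .writeStream()
                .format("parquet")                    // placeholder sink
                .option("path", "/tmp/delayed-output")
                .option("checkpointLocation", "/tmp/delayed-output-checkpoints")
                .trigger(Trigger.ProcessingTime("15 minutes"))
                .start();

        query.awaitTermination();
    }
}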