Hi Lars,
That sounds like a painful process. Since the offsets are inconsistent, I
would suggest resetting the Kafka source state: change the source's `uid`,
set the source to start from the earliest offsets if you haven't already,
make the bootstrap server change, and restart your job with
allowNonRestoredState enabled.
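For illustration, a rough sketch of what that could look like with the
KafkaSource builder (the topic, group, and uid values below are hypothetical;
adjust them to your job):

```java
// Sketch only: hypothetical topic/group/uid names, pointing at the new cluster.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("new-broker-1:9092,new-broker-2:9092") // new cluster
        .setTopics("my-topic")
        .setGroupId("my-consumer-group")
        .setStartingOffsets(OffsetsInitializer.earliest()) // don't trust old offsets
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
   .uid("kafka-source-v2"); // new uid => the old source state is not restored
```

You would then resume from the savepoint while dropping the now-orphaned
source operator state, e.g. `flink run -s <savepointPath>
--allowNonRestoredState <jarFile>`.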
I was wrong about this. The AS OF style processing join has been disabled
at a higher level,
in
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecTemporalJoin#createJoinOperator
David
On Thu, Oct 6, 2022 at 9:59 AM David Anderson wrote:
> Salva,
>
> Have you tried doing an AS OF
Hello,
What is the recommended approach for migrating Flink jobs to a new Kafka
server? I was naively hoping to use Kafka MirrorMaker to sync the old
server with the new server, and simply continue from a savepoint with updated
URLs. Unfortunately, the Kafka offsets are not identical for log compacted
topics.
Hello all.
I can’t understand this intermittent problem: sometimes checkpoints stop
completing, and sometimes they start completing only every other time.
Flink 1.14.4 in Kubernetes application mode.
2022-10-06 09:08:04,731 INFO
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering
checkpoint …
As for your original question, the documentation states that a temporal
table function can only be registered via the Table API, and I believe this
is true.
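For reference, this is the registration path the documentation describes; a
minimal sketch, assuming a history table `ratesHistory` with hypothetical
`update_time` and `currency` columns:

```java
// Sketch: register a temporal table function via the Table API.
// The table name and column names here are hypothetical.
TemporalTableFunction rates = ratesHistory
        .createTemporalTableFunction($("update_time"), $("currency"));

// Make the function available to SQL and Table API queries.
tEnv.createTemporarySystemFunction("rates", rates);
```

The function is then applied with a LATERAL TABLE join rather than AS OF
syntax, e.g. `... FROM Orders, LATERAL TABLE (rates(o_proctime)) ...`
(names hypothetical).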
David
On Thu, Oct 6, 2022 at 9:59 AM David Anderson wrote:
> Salva,
>
> Have you tried doing an AS OF style processing time temporal join?
Salva,
Have you tried doing an AS OF style processing time temporal join? I know
the documentation leads one to believe this isn't supported, but I think it
actually works. I'm basing this on this comment [1] in the code for
the TemporalProcessTimeJoinOperator:
The operator to temporal join a stream …
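For context, the AS OF syntax in question looks like the following; this is a
sketch only, with hypothetical Orders/Rates tables and a processing-time
attribute `o.proc_time`, not a claim that the query is supported:

```java
// Sketch: an AS OF style processing-time temporal join in Flink SQL.
// Table and column names are hypothetical.
tEnv.executeSql(
    "SELECT o.order_id, o.price * r.rate AS converted_price " +
    "FROM Orders AS o " +
    "JOIN Rates FOR SYSTEM_TIME AS OF o.proc_time AS r " +
    "ON o.currency = r.currency");
```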