Hi everyone,
From the mailing list, I see this question asked a lot. But I can't seem to
find a solution to my problem. I would appreciate some help.
The requirement for our project is that we do not lose data and do not
produce duplicate records. Our pipelines are written with Apache Beam
(2.35.0
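For reference, here is a minimal sketch of the kind of pipeline being
described, assuming it reads from and writes to Kafka via KafkaIO on the
Flink runner (the truncated message does not confirm this); the bootstrap
servers, topic names, checkpoint interval, and sink group id are
placeholders, not taken from the original message.

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceKafkaSketch {
  public static void main(String[] args) {
    // Checkpointing must be enabled on the Flink runner; the Kafka source
    // commits offsets and the exactly-once sink commits its transactions
    // only as part of a successful checkpoint.
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    options.setCheckpointingInterval(60_000L); // placeholder interval

    Pipeline p = Pipeline.create(options);

    p.apply("ReadFromKafka",
            KafkaIO.<String, String>read()
                .withBootstrapServers("kafka:9092")              // placeholder
                .withTopic("input-topic")                        // placeholder
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withReadCommitted()           // skip records from aborted transactions
                .commitOffsetsInFinalize()     // offsets advance with checkpoints
                .withoutMetadata())
        // ... transforms go here ...
        .apply("WriteToKafka",
            KafkaIO.<String, String>write()
                .withBootstrapServers("kafka:9092")              // placeholder
                .withTopic("output-topic")                       // placeholder
                .withKeySerializer(StringSerializer.class)
                .withValueSerializer(StringSerializer.class)
                .withEOS(1, "eos-sink-group"));  // transactional, exactly-once sink

    p.run();
  }
}

Note that downstream consumers of the output topic would also need
isolation.level=read_committed, since the exactly-once sink writes
transactionally.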
Hi James,
I literally just went through what you're doing at my job. While I'm using
Apache Beam and not the Flink API directly, the concepts still apply.
TL;DR: it works as expected.
What I did was set up a Kafka topic listener that always throws an
exception if the last received message's time
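Purely as an illustration, a failure-injection step of that kind could look
roughly like the following Beam DoFn; the class name, the timestamp
threshold, and the exact crash condition are placeholders (the reply is
truncated before spelling out its condition), not the poster's actual code.

import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.transforms.DoFn;

// Rough sketch of a failure-injection step: crash the job on purpose so it
// restarts from the last checkpoint, then compare the downstream output
// with the input to check that nothing was lost or written twice.
public class InjectFailureFn extends DoFn<KafkaRecord<String, String>, String> {

  private final long failAfterEpochMillis; // hypothetical threshold

  public InjectFailureFn(long failAfterEpochMillis) {
    this.failAfterEpochMillis = failAfterEpochMillis;
  }

  @ProcessElement
  public void processElement(
      @Element KafkaRecord<String, String> record, OutputReceiver<String> out) {
    // Throw once a record's timestamp passes the threshold; the Flink runner
    // fails the job and recovers it from the most recent successful checkpoint.
    if (record.getTimestamp() > failAfterEpochMillis) {
      throw new RuntimeException(
          "Injected failure at record timestamp " + record.getTimestamp());
    }
    out.output(record.getKV().getValue());
  }
}

Running it once with a threshold in the near future, letting it crash, and
diffing the output topic against the input is one way to confirm the
end-to-end behaviour the thread is asking about.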
Please email user-unsubscr...@flink.apache.org as described here:
https://flink.apache.org/community.html
On Sun, Jan 15, 2023 at 8:55 AM jay green wrote:
> unsubscribe
>
Please email user-unsubscr...@flink.apache.org as described here:
https://flink.apache.org/community.html
On Sun, Jan 15, 2023 at 4:31 AM Saver Chia wrote:
> unsubscribe
>