Thanks Jingsong and Kurt for more details.
Yes, I'm planning to try out deduplication once I'm done upgrading to
version 1.9. Hopefully the deduplication is done by only one task and its
result is reused everywhere else.
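(For anyone searching the archives: the deduplication discussed here is
usually written as a ROW_NUMBER query on the blink planner in 1.9+. A
minimal sketch in Java; the Orders table and its order_id and proctime
columns are made-up names:)

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner().inStreamingMode().build();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

    // Keep only the first row per order_id (processing-time order).
    Table deduped = tEnv.sqlQuery(
        "SELECT order_id, price FROM (" +
        "  SELECT *, ROW_NUMBER() OVER (" +
        "    PARTITION BY order_id ORDER BY proctime ASC) AS row_num" +
        "  FROM Orders" +
        ") WHERE row_num = 1");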
One more follow-up question: I see "For production use cases, we recommend
the old planner that was present before Flink 1.9" in the documentation.
Hey folks, we've been running a Flink application on Kubernetes, using the
taskmanager.sh script and passing -Djobmanager.heap.size=9000m and
-Dtaskmanager.heap.size=7000m as options to the script. I noticed from the
logs that the Maximum heap size logged completely ignores these arguments
and just shows the default value.
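(In case it helps others hitting the same thing: in standalone setups the
scripts normally read these values from flink-conf.yaml rather than from
extra script arguments, so one thing to check is whether the conf file
carries the intended sizes. A sketch with the values from this message:)

    # flink-conf.yaml
    jobmanager.heap.size: 9000m
    taskmanager.heap.size: 7000m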
> In kafka010, ConsumerRecord has a field named timestamp. It is
> encapsulated in Kafka010Fetcher.
> How can I get the timestamp when I write a Flink job?
Kafka010Fetcher puts the timestamps into the StreamRecords that wrap your
events. If you want to access these timestamps, you can use a
ProcessFunction.
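A minimal sketch of that approach (the String stream and names are made up;
ctx.timestamp() returns the timestamp stored in the surrounding
StreamRecord, which for the Kafka 0.10 consumer is the Kafka record
timestamp):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // events is assumed to be a DataStream<String> produced by the Kafka 0.10 consumer
    DataStream<String> withTimestamps = events.process(
        new ProcessFunction<String, String>() {
            @Override
            public void processElement(String value, Context ctx, Collector<String> out) {
                // Timestamp of the current element, attached by Kafka010Fetcher
                Long kafkaTimestamp = ctx.timestamp();
                out.collect(kafkaTimestamp + ": " + value);
            }
        });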
In kafka010, ConsumerRecord has a field named timestamp. It is encapsulated
in Kafka010Fetcher. How can I get the timestamp when I write a Flink job?
Thank you very much.
Hi,
I tried running a stop with savepoint on my Flink job, which includes
numerous Flink SQL streams. While running the command I used `-d` to drain
all streams with MAX_WATERMARK.
Looking at the Flink UI, all sources finished successfully, yet all the
Flink SQL streams were still in a Running state and never finished.
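(For reference, the command shape being described; the savepoint directory
and job id below are placeholders:)

    # stop with savepoint, draining with MAX_WATERMARK via -d
    ./bin/flink stop -d -p hdfs:///savepoints <job-id>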