Hi Chengzhi,
Yes, generally speaking, you would launch a separate job to do the
backfilling, and then shut that job down once the backfill is complete.
For this to work, you’ll also have to make sure that writes to the external
sink are idempotent, so that re-processing the same records does not produce
duplicates.
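One common way to get idempotence is to key every row on a stable id and
delete-then-insert inside a single transaction, so replaying the same rows
leaves the table in the same state; this pattern also works on Redshift,
which does not support PostgreSQL's ON CONFLICT clause. A rough sketch in
plain JDBC; the events table and its columns are placeholders, not your
schema:

import java.sql.Connection;
import java.sql.PreparedStatement;

public class IdempotentWriter {
    // Delete-then-insert keyed on event_id: replaying the same record
    // any number of times yields the same final table contents.
    public static void writeEvent(Connection conn, String eventId, String payload)
            throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement del = conn.prepareStatement(
                 "DELETE FROM events WHERE event_id = ?");
             PreparedStatement ins = conn.prepareStatement(
                 "INSERT INTO events (event_id, payload) VALUES (?, ?)")) {
            del.setString(1, eventId);
            del.executeUpdate();
            ins.setString(1, eventId);
            ins.setString(2, payload);
            ins.executeUpdate();
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}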
Are you using Kafka as the data source?
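If it is Kafka, the backfill job can simply re-read the historical range from
the topic and be taken down once it catches up. A rough sketch, assuming the
FlinkKafkaConsumer; the topic name, servers, consumer group, and start
timestamp are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class BackfillJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        // A dedicated group id keeps the backfill's offsets separate
        // from the always-on production job.
        props.setProperty("group.id", "backfill-run-1");

        FlinkKafkaConsumer<String> source = new FlinkKafkaConsumer<>(
            "events", new SimpleStringSchema(), props);
        // Start at the beginning of the range that needs recomputing,
        // e.g. 2019-01-01 00:00:00 UTC; shut the job down once it has
        // read past the end of the range.
        source.setStartFromTimestamp(1546300800000L);

        env.addSource(source)
           // ... apply the corrected transformation logic here ...
           .print(); // replace with the idempotent sink
        env.execute("backfill");
    }
}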
Hey, Flink community,
I have a question about backfilling data and would like to hear how people
approach it.
I have a stream of data going through a BucketingSink to S3 and then into
Redshift. If something changes in the Flink logic and I need to backfill some
dates, for example, we are streaming data for today but the past few days were
written with the old logic, what would be the best practice for re-processing
just those dates?
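For reference, a minimal sketch of this kind of date-bucketed S3 sink (bucket
path and element type are placeholders):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

public class S3SinkSetup {
    public static void attach(DataStream<String> events) {
        // One S3 prefix per day, e.g. s3://my-bucket/events/2019-01-01/.
        // Note that DateTimeBucketer buckets on wall-clock time, so a
        // backfill that replays old records would need a custom Bucketer
        // keyed on the event's own timestamp for the re-processed data
        // to land under its original dates.
        BucketingSink<String> sink = new BucketingSink<>("s3://my-bucket/events");
        sink.setBucketer(new DateTimeBucketer<>("yyyy-MM-dd"));
        events.addSink(sink);
    }
}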