liuyaolin opened a new issue #8079: URL: https://github.com/apache/incubator-doris/issues/8079
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/incubator-doris/issues?q=is%3Aissue) and found no similar issues.

### Version

doris-flink-connector-1.11.6-2.12-1.0.0-SNAPSHOT

### What's Wrong?

The Flink Doris sink writes in batches: each task starts a timer that periodically flushes buffered data to Doris. Suppose the user configures a flush threshold of 1000 rows. After 500 rows have been buffered, a checkpoint completes successfully and the Kafka offset is committed accordingly. The next 500 rows then arrive, but before the next checkpoint the Doris server fails, the flush throws an error, and the Flink task restarts. The task resumes consuming Kafka from the last checkpointed offset, so the first 500 buffered rows are lost.

### What You Expected?

Perform a flush at checkpoint time, i.e. in the `initializeState` method of the `GenericDorisSinkFunction` class, to prevent data loss.

### How to Reproduce?

_No response_

### Anything Else?

_No response_

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
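The failure mode and the proposed fix can be sketched in a minimal, self-contained form. This is an illustration only, not the connector's actual code: `BufferingSink`, `invoke`, and `snapshotState` are hypothetical names standing in for the sink's batch buffer, its write path, and the Flink `CheckpointedFunction#snapshotState` hook; the `flush()` body stands in for the Stream Load request to Doris. The point is that flushing inside the checkpoint callback guarantees every record covered by the committed Kafka offset has already reached the sink before the offset is committed:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a batching sink that flushes on checkpoint.
public class BufferingSink {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();   // rows not yet sent to Doris
    private final List<String> flushed = new ArrayList<>();  // stand-in for rows Doris has accepted

    public BufferingSink(int batchSize) {
        this.batchSize = batchSize;
    }

    // Buffer a record; only flush once the configured batch size is reached.
    // With batchSize = 1000, 500 buffered rows would sit unflushed indefinitely.
    public void invoke(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Checkpoint hook (in Flink this would be CheckpointedFunction#snapshotState).
    // Flushing here means no row counted into the checkpointed source offset
    // can still be sitting in the in-memory buffer when the checkpoint completes.
    public void snapshotState() {
        flush();
    }

    private void flush() {
        flushed.addAll(buffer);  // stand-in for the actual Stream Load to Doris
        buffer.clear();
    }

    public int pending() {
        return buffer.size();
    }

    public int flushedCount() {
        return flushed.size();
    }
}
```

Without the `snapshotState` flush, a post-checkpoint failure discards the 500 pending rows while the source rewinds only to the committed offset, reproducing the loss described above.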