Hi, Hatem. 
I think there is no way to catch the exception and then ignore it in the current 
KafkaSink implementation. You may need to extend KafkaSink to do that. 
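
As a possible workaround (this is only a sketch, not something the thread confirms): since the error happens only after compression on the producer side, you could drop oversized windows before they ever reach the sink, by compressing each serialized record the same way the producer would and filtering out those above `max.request.size`. The class name `SizeGuard`, the field `maxCompressedBytes`, and the choice of gzip as the compression codec are all assumptions for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

/**
 * Hypothetical pre-sink size guard: compresses a serialized record with gzip
 * (assuming the producer uses a comparable codec) and reports whether the
 * compressed payload fits under a byte limit mirroring Kafka's
 * max.request.size. Records that fail the check could be dropped with a
 * DataStream.filter(...) placed before the KafkaSink.
 */
public final class SizeGuard {
    private final int maxCompressedBytes;

    public SizeGuard(int maxCompressedBytes) {
        this.maxCompressedBytes = maxCompressedBytes;
    }

    /** Returns true if the gzip-compressed payload is within the limit. */
    public boolean fits(byte[] payload) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(payload);
            }
            return bos.size() <= maxCompressedBytes;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

This avoids touching the sink internals at all, at the cost of compressing each record twice; the estimate can also diverge from the broker's view if the producer batches records or uses a different codec.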

Best regards, 
Yuxia 


From: "Hatem Mostafa" <m...@hatem.co> 
To: "User" <user@flink.apache.org> 
Sent: Thursday, February 16, 2023, 9:32:44 PM 
Subject: KafkaSink handling message size produce errors 

Hello, 
I am writing a Flink job that reads from and writes to Kafka. It uses a window 
operator and eventually writes the result of each window into a Kafka topic. 
The accumulated data can exceed the maximum message size after compression at 
the producer level. I want to be able to catch the exception coming from the 
producer and ignore that window. I could not find a way to do that in KafkaSink 
(https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-sink). 
Is there a way to do so? 

I attached here an example of an error that I would like to handle gracefully. 
This question is similar to one asked on Stack Overflow 
(https://stackoverflow.com/questions/52308911/how-to-handle-exceptions-in-kafka-sink), 
but the answer there applies to older versions of Flink. 

Regards, 
Hatem 
