> Does this mean that I am losing data, or will this be retried by the sink?
I don't have direct experience with KafkaIO, but since this exception
happened in the finishBundle method, Beam will not have committed the
bundle, so the records should not be silently dropped.
More specifically, looking at the KafkaWriter code, I see that
finishBundle calls checkForFailures (the frame in your stack trace), so a
failed send fails the whole bundle and the runner should retry it.
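As an aside, and not a fix for whatever is making the sends fail in the
first place: here is a minimal sketch (broker address and topic name are
made up) of how the Kafka producer inside KafkaIO can be told to retry
transient send errors itself via withProducerConfigUpdates, so that fewer
of them ever surface in checkForFailures:

import java.util.Map;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaSinkRetries {
  // Writes records to Kafka, letting the producer client retry transient
  // send errors before KafkaIO's failure check ever sees them.
  static void write(PCollection<KV<String, String>> records) {
    records.apply(
        KafkaIO.<String, String>write()
            .withBootstrapServers("broker:9092")   // hypothetical address
            .withTopic("output-topic")             // hypothetical topic
            .withKeySerializer(StringSerializer.class)
            .withValueSerializer(StringSerializer.class)
            .withProducerConfigUpdates(
                Map.<String, Object>of(
                    ProducerConfig.RETRIES_CONFIG, 3,
                    ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000)));
  }
}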
Hi!
I sometimes get the following error in one of my streaming pipelines that
uses KafkaIO as a sink:
java.io.IOException: KafkaWriter : failed to send 1 records (since last report)
org.apache.beam.sdk.io.kafka.KafkaWriter.checkForFailures(KafkaWriter.java:120)
org.apache.beam.sdk.i