Hi Robert,

As expected with exactly-once guarantees, a record that caused a Flink job
to fail will be reprocessed when the job restarts.

If a specific "corrupt" record keeps the job in a fail-and-restart loop,
there is a way to let the Kafka consumer skip that specific record: return
null when attempting to deserialize it (specifically, from the
`deserialize` method of the `DeserializationSchema` you provide to the
consumer). The consumer will then drop the record and continue.
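For example, here is a rough sketch of such a schema (assuming a JSON
payload parsed with Jackson; the class name `SkipCorruptJsonSchema` and
the use of `JsonNode` are just illustrative, and the exact package of
`DeserializationSchema` may differ between Flink versions):

import java.io.IOException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// Illustrative sketch: returns null for records that fail to parse,
// so the Flink Kafka consumer skips them instead of failing the job.
public class SkipCorruptJsonSchema implements DeserializationSchema<JsonNode> {

    private static final long serialVersionUID = 1L;

    // ObjectMapper is not serializable, so create it lazily.
    private transient ObjectMapper mapper;

    @Override
    public JsonNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return mapper.readTree(message);
        } catch (IOException e) {
            // Corrupt record: returning null makes the consumer skip it
            // rather than fail the job and re-enter the restart loop.
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(JsonNode nextElement) {
        return false;
    }

    @Override
    public TypeInformation<JsonNode> getProducedType() {
        return TypeInformation.of(JsonNode.class);
    }
}

You would then pass an instance of this schema to the Flink Kafka
consumer's constructor in place of your current schema.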

Cheers,
Gordon


