Hi, I'm using Apache Flink 1.8.0 and consuming events from Kafka with nothing fancy:
```java
Properties props = new Properties();
props.setProperty("bootstrap.servers", kafkaAddress);
props.setProperty("group.id", kafkaGroup);
FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props);
```

I then apply some JSON transforms and push the results to my SQL database using JDBC and a stored procedure. Let's assume the SQL sink fails. We know that a Kafka consumer can either commit offsets periodically or commit them manually based on the consumer's logic.

- How are the source Kafka consumer's offsets handled?
- Does the Flink Kafka consumer commit the offset per event/record?
- Will the single event that failed be retried?
- If we had 5 incoming events and, say, the 3rd one failed, will processing continue from the 3rd, or will the job restart and retry all 5 events?
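For context, the two commit modes mentioned above correspond to plain Kafka consumer settings. This is a minimal sketch of those properties (an assumption for illustration only; it does not show how Flink itself decides when to commit):

```java
import java.util.Properties;

public class KafkaCommitModes {
    public static void main(String[] args) {
        Properties props = new Properties();

        // Mode 1: periodic auto-commit. The consumer commits offsets in the
        // background at a fixed interval.
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "5000");

        // Mode 2: manual commit. Disable auto-commit and call
        // consumer.commitSync() / commitAsync() from the application's logic.
        // props.setProperty("enable.auto.commit", "false");

        System.out.println("auto-commit=" + props.getProperty("enable.auto.commit"));
    }
}
```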