The reason two Flink jobs using a Kafka consumer with the same consumer
group are seeing the same events is that Flink's FlinkKafkaConsumer does
not participate in Kafka's consumer group management. Instead, Flink
assigns all partitions of the subscribed topics to each job's source
subtasks itself (on a per-job basis). The consumer group id is essentially
only used for committing offsets back to Kafka (and as the starting
position when starting from group offsets), not for distributing
partitions between jobs, so each job independently reads every partition
and therefore sees every event.
Hi, I am confused about the consumer group of FlinkKafkaConsumer. I have
two applications with the same code, like this:
//---
val bsEnv = StreamExecutionEnvironment.getExecutionEnvironment
bsEnv.setRestartStrategy(RestartStrategies.noRestart())
val consumerProps =