Hi,

I am using Flink 1.4 with Kafka 0.11. My streaming job has 4 Kafka
consumers, each subscribing to one of 4 different topics. The stream from
each consumer is processed in 3 to 4 different ways, writing to a total of
12 sinks (Cassandra tables). When the job runs, the first 8 to 10 records
are processed correctly, but after that no further records are consumed.
I have tried the same job with 'Flink 1.3.2 and Kafka 0.10' and 'Flink 1.4
and Kafka 0.10', both of which gave the same result.
