Hi Juho,

Can you confirm that the new partition is being consumed, and that it is only Flink's reported metrics that do not include it? If so, then I think your observations can be explained by this issue: https://issues.apache.org/jira/browse/FLINK-8419
This issue should have been fixed in the recently released 1.4.2 version.

Cheers,
Gordon

On 22 March 2018 at 8:02:40 PM, Juho Autio (juho.au...@rovio.com) wrote:

According to the docs*, flink.partition-discovery.interval-millis can be set to enable automatic partition discovery. I'm testing this, but apparently it doesn't work.

I'm using Flink Version: 1.5-SNAPSHOT, Commit: 8395508, and FlinkKafkaConsumer010.

I had my Flink stream running, consuming an existing topic with 3 partitions, among some other topics. I then increased that topic's partition count from 3 to 4**.

I checked the consumer offsets committed by Secor: it's now consuming all 4 partitions. I checked the consumer offsets of my Flink stream: it's still consuming only the 3 original partitions. I also checked the Task Metrics of this job in the Flink UI, and it only offers Kafka-related metrics for 3 partitions (0, 1 & 2).

According to Flink UI > Job Manager > Configuration: flink.partition-discovery.interval-millis=60000 – so that's just 1 minute. It's already been more than 20 minutes since I added the new partition, so Flink should have picked it up. How can I debug this?

Btw, this job has external checkpoints enabled, taken once per minute. Those are also succeeding.

*) https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-topic-and-partition-discovery

**) ~/kafka$ bin/kafka-topics.sh --zookeeper $ZOOKEEPER_HOST --describe --topic my_topic
Topic:my_topic	PartitionCount:3	ReplicationFactor:1	Configs:
~/kafka$ bin/kafka-topics.sh --zookeeper $ZOOKEEPER_HOST --alter --topic my_topic --partitions 4
Adding partitions succeeded!
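For reference, partition discovery is enabled by setting the flink.partition-discovery.interval-millis key in the properties passed to the consumer, as described in the docs linked above. A minimal sketch of that configuration (the broker address, group id, and topic name here are placeholders, not values from this thread):

```java
import java.util.Properties;

public class PartitionDiscoveryConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "my-consumer-group");       // placeholder group id

        // Enable periodic partition discovery; the value is the check interval
        // in milliseconds (60000 = 1 minute, as in the job configuration above).
        props.setProperty("flink.partition-discovery.interval-millis", "60000");

        // These properties would then be passed to the Kafka consumer, e.g.:
        // new FlinkKafkaConsumer010<>("my_topic", new SimpleStringSchema(), props);

        System.out.println(props.getProperty("flink.partition-discovery.interval-millis"));
    }
}
```

Note that, per the FLINK-8419 issue linked above, a newly discovered partition may be consumed correctly even while its per-partition metrics are missing from the Flink UI, so offsets committed to Kafka are the more reliable signal here.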