With 0.8.0, I'm seeing that an initial metadata request fails if the number of running brokers is fewer than the configured replication factor.
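For context, here's roughly the kind of client that hits it (a minimal sketch against the 0.8 Java producer API; the class name, broker hosts, and topic are placeholders, and I'm assuming auto topic creation is enabled with default.replication.factor=2):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class MetadataRepro {
    public static void main(String[] args) {
        // Placeholder broker list; in the failing case only one of these is up.
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // The first send issues a metadata request for "new-topic"; the topic
        // doesn't exist yet, so the broker tries to auto-create it with the
        // configured replication factor.
        producer.send(new KeyedMessage<String, String>("new-topic", "hello"));
        producer.close();
    }
}

With two brokers expected but only one running, the broker-side topic creation fails and the metadata request errors out: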
877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis - [KafkaApi-1946108683] Error while retrieving topic metadata
kafka.admin.AdministrationException: replication factor: 2 larger than available brokers: 1
        at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:62)
        at kafka.admin.CreateTopicCommand$.createTopic(CreateTopicCommand.scala:92)
        at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:409)
        at kafka.server.KafkaApis$$anonfun$handleTopicMetadataRequest$1.apply(KafkaApis.scala:401)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
        at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:400)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
        at java.lang.Thread.run(Thread.java:680)

However, if the number of brokers drops after a client has connected, producing clients have no problem continuing to send messages. So I thought the idea was that once a replica becomes available again, it catches up on any messages it missed. This is good because it makes things like rolling restarts of the brokers possible. But it's a problem if a rolling restart happens at the same time a new client is coming online and trying to initialize its connection.

Thoughts? Shouldn't the requirements be the same for initial connections as for ongoing connections?

Jason