[ https://issues.apache.org/jira/browse/KAFKA-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894736#comment-13894736 ]
Jay Kreps commented on KAFKA-1182:
----------------------------------

1. Yes, right? Logically, what does it mean to say that data is replicated 2 times if there is only one server? That is kind of the point, right? :-)
2. There is a distinction between the node count and the in-sync node count. Per (1), you definitely shouldn't be able to create a topic with a replication factor of X if your node count is less than X. It could be possible to create a topic and assign some replicas to down nodes, but I'm not really sure of the utility of doing this.
3. I don't understand what you are saying here.

> Topic not created if number of live brokers less than # replicas
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1182
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1182
>             Project: Kafka
>          Issue Type: Improvement
>          Components: producer
>    Affects Versions: 0.8.0
>         Environment: Centos 6.3
>            Reporter: Hanish Bansal
>            Assignee: Jun Rao
>
> We have a Kafka cluster of 2 nodes (Kafka 0.8.0).
> Replication factor: 2
> Number of partitions: 2
>
> Actual behaviour:
> If either of the two nodes goes down, the topic is not created in Kafka.
>
> Steps to reproduce:
> 1. Create a 2-node Kafka cluster with replication factor 2
> 2. Start the Kafka cluster
> 3. Kill one node
> 4. Start a producer writing to a new topic
> 5.
> Observe the exception below:
> 2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata with correlation id 3 for topics [Set(test-topic)] from broker [id:0,host:122.98.12.11,port:9092] failed
> java.net.ConnectException: Connection refused
> 	at sun.nio.ch.Net.connect(Native Method)
> 	at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
> 	at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> 	at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
> 	at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
> 	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
> 	at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
> 	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
> 	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
> 	at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
> 	at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
> 	at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
> 	at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> 	at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
> 	at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
> 	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
> 	at kafka.producer.Producer.send(Producer.scala:76)
> 	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>
> Expected behaviour:
> When there are fewer live brokers than replicas, the topic should still be created so that at least the live brokers can receive the data. They can replicate the data to the other broker once the down broker comes back up. As it stands, when there are fewer live brokers than replicas, there is complete loss of data.

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
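To make the two policies under discussion concrete, here is a minimal, hypothetical sketch in plain Java (this is not Kafka's actual AdminUtils code; the class and method names are invented for illustration). The strict path reflects the current behaviour, rejecting creation when fewer brokers are live than the replication factor; the relaxed path reflects what this ticket asks for, assigning replicas to whichever brokers are live and letting replication catch up later:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the replica-assignment check discussed above.
public class ReplicaAssignmentSketch {

    /**
     * Assign replicas for one partition. Per point (1) in the comment, a
     * replication factor larger than the total broker count is always
     * rejected. With allowUnderReplicatedCreate=false (current behaviour),
     * creation also fails when fewer brokers are live than the replication
     * factor; with true (proposed behaviour), replicas go to the live
     * brokers only, possibly fewer than requested.
     */
    static List<Integer> assignReplicas(List<Integer> allBrokers,
                                        List<Integer> liveBrokers,
                                        int replicationFactor,
                                        boolean allowUnderReplicatedCreate) {
        if (replicationFactor > allBrokers.size()) {
            throw new IllegalArgumentException(
                "replication factor " + replicationFactor
                + " larger than broker count " + allBrokers.size());
        }
        if (!allowUnderReplicatedCreate && liveBrokers.size() < replicationFactor) {
            throw new IllegalStateException(
                "only " + liveBrokers.size() + " live brokers for replication factor "
                + replicationFactor);
        }
        // Take as many replicas as the live set allows, up to the requested factor.
        int n = Math.min(replicationFactor, liveBrokers.size());
        return new ArrayList<>(liveBrokers.subList(0, n));
    }
}
```

Under the relaxed policy, the 2-node/replication-factor-2 scenario from the report would yield a single-replica partition on the surviving broker instead of a failed creation, and the second replica could be populated once the dead broker rejoins.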