[ https://issues.apache.org/jira/browse/KAFKA-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894814#comment-13894814 ]
Hanish Bansal commented on KAFKA-1182:
--------------------------------------

@Jay: The 3rd point is that Kafka provides tolerance for n-1 failures, where n is the replication factor, i.e. data is not lost as long as at most n-1 nodes go down. But here, if the number of live nodes is less than n when the topic is created, there is complete data loss, which is unacceptable.

I agree with Clark and Todd. As Todd also described, anyone would rather accept a degraded state than a complete loss of data.

> Topic not created if number of live brokers less than # replicas
> ----------------------------------------------------------------
>
>                 Key: KAFKA-1182
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1182
>             Project: Kafka
>          Issue Type: Improvement
>          Components: producer
>    Affects Versions: 0.8.0
>         Environment: Centos 6.3
>            Reporter: Hanish Bansal
>            Assignee: Jun Rao
>
> We are having a Kafka cluster of 2 nodes (using Kafka 0.8.0).
> Replication Factor: 2
> Number of partitions: 2
>
> Actual Behaviour:
> Out of the two nodes, if any one node goes down then the topic is not created in Kafka.
>
> Steps to Reproduce:
> 1. Create a 2 node Kafka cluster with replication factor 2
> 2. Start the Kafka cluster
> 3. Kill any one node
> 4. Start the producer to write on a new topic
> 5. Observe the exception stated below:
>
> 2013-12-12 19:37:19 0 [WARN ] ClientUtils$ - Fetching topic metadata with correlation id 3 for topics [Set(test-topic)] from broker [id:0,host:122.98.12.11,port:9092] failed
> java.net.ConnectException: Connection refused
>     at sun.nio.ch.Net.connect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
>     at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>     at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>     at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>     at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>     at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>     at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>     at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
>     at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>     at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
>     at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
>     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
>     at kafka.producer.Producer.send(Producer.scala:76)
>     at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>
> Expected Behaviour:
> In case of live brokers less than # replicas:
> The topic should still be created so that at least the live brokers can receive the data. They can replicate the data to the other brokers once any down broker comes back up.
> Because right now, in case of live brokers less than # replicas, there is complete loss of data.
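For reference, here is a minimal sketch of the kind of producer used in step 4 of the steps to reproduce above, written against the old 0.8 Scala/Java producer API that appears in the stack trace (kafka.javaapi.producer.Producer). The second broker address, topic name, and serializer are assumptions for illustration only; the ticket does not include the reporter's actual code.

{code:java}
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestTopicProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed 2-node broker list; in the repro, one of these brokers has been killed.
        props.put("metadata.broker.list", "122.98.12.11:9092,122.98.12.12:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Sending to a topic that does not exist yet forces a metadata fetch and
        // (broker-side) topic auto-creation; with fewer live brokers than the
        // replication factor, the WARN and ConnectException quoted above appear
        // and the topic is never created.
        producer.send(new KeyedMessage<String, String>("test-topic", "hello"));
        producer.close();
    }
}
{code}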