Hi,

I'm testing Kafka 0.8.0 failover.

I have 5 brokers: 1, 2, 3, 4, 5. I shut down broker 5 (with controlled shutdown
enabled). Broker 4 is my bootstrap broker.

My config has: default.replication.factor=2, num.partitions=8.
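
In case it matters, the relevant excerpt of my server.properties (only the
settings mentioned above; broker id, port and log dirs omitted) is:

# defaults applied when topics are auto-created
num.partitions=8
default.replication.factor=2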

When I look at the Kafka server.log on broker 4, I see the error below, which
only goes away when I restart broker 5.


[2014-01-29 04:12:15,348] ERROR [KafkaApi-4] Error while fetching metadata for partition [data,4] (kafka.server.KafkaApis)
kafka.common.LeaderNotAvailableException: Leader not available for partition [data,4]
    at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:468)
    at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:456)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
    at scala.collection.immutable.List.foreach(List.scala:45)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
    at scala.collection.immutable.List.map(List.scala:45)
    at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:456)
    at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:452)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:123)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:322)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:322)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
    at scala.collection.immutable.HashSet.map(HashSet.scala:32)
    at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:452)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
    at java.lang.Thread.run(Thread.java:724)
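
In case it helps, here is roughly how I'm checking the metadata from the client
side while broker 5 is down. It's just a minimal sketch against the 0.8.0
javaapi; the host name "broker4-host", port 9092, and the client id are
placeholders for my actual setup. The idea is to print which of the 8
partitions of "data" report no leader:

import java.util.Collections;
import java.util.List;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class MetadataCheck {
    public static void main(String[] args) {
        // "broker4-host" and 9092 are placeholders for the bootstrap broker address.
        SimpleConsumer consumer =
                new SimpleConsumer("broker4-host", 9092, 100000, 64 * 1024, "metadataCheck");
        try {
            // Ask broker 4 for the metadata of the "data" topic.
            List<String> topics = Collections.singletonList("data");
            TopicMetadataResponse resp = consumer.send(new TopicMetadataRequest(topics));
            for (TopicMetadata topic : resp.topicsMetadata()) {
                for (PartitionMetadata part : topic.partitionsMetadata()) {
                    // leader() is null when no leader is currently assigned to the partition.
                    String leader = (part.leader() == null)
                            ? "NONE"
                            : part.leader().host() + ":" + part.leader().port();
                    System.out.println("partition " + part.partitionId()
                            + " leader=" + leader
                            + " errorCode=" + part.errorCode());
                }
            }
        } finally {
            consumer.close();
        }
    }
}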


Any ideas?
