Your assumption is correct. However, I was under the impression that a
controlled shutdown would do the rebalancing; was I wrong? What is the proper
way to shut down a broker so that its partitions are served by the other
brokers? What is the process to recover from this? And how can we avoid this
while still keeping the ability to remove servers from the tier?
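In case it helps frame the question, these are the tools I have been looking
at (names taken from the 0.8 docs; I am not sure the flags are right for
beta1, and the broker id below is just a placeholder):

bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper localhost:2181 --broker 7
bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181

i.e. a controlled shutdown per broker to move leadership away first, and a
preferred replica election afterwards to move leadership back once brokers
rejoin.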

Sent from my iPad

On Aug 6, 2013, at 10:55 PM, Tejas Patil <tejas.patil...@gmail.com> wrote:

> I assume that "We had 6 kafka in the tier" means that you had 6 kafka
> brokers.
> 
> About the exception that you see: I think that the 3 brokers you took down
> were having the data for [junit2_analytics_data_log,0] and no other live
> broker has the data for [junit2_analytics_data_log,0].
> 
> You could run this command to see the details about the partition
> assignment for that topic:
> bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic
> junit2_analytics_data_log
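> The output should look something like the line below (the replica broker
> ids here are purely illustrative). A leader of none/-1 and an empty ISR for
> partition 0 would confirm that no live broker can currently serve it:
> topic: junit2_analytics_data_log  partition: 0  leader: none  replicas: 1,2,3  isr: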
> 
> 
> On Tue, Aug 6, 2013 at 10:08 PM, Vadim Keylis <vkeylis2...@gmail.com> wrote:
> 
>> We are using Kafka 0.8 beta1. We had 6 Kafka brokers in the tier, with the
>> replication factor set to 3. Then 3 servers were removed using the
>> controlled shutdown method, and I am getting the error below after that.
>> What went wrong during the shutdown? How do I recover from the error? What
>> steps should I take to avoid this in the future?
>> 
>> Thanks so much in advance.
>> 
>> 
>> [2013-08-06 21:56:46,044] ERROR [KafkaApi-7] Error while fetching metadata
>> for partition [junit2_analytics_data_log,0] (kafka.server.KafkaApis)
>> kafka.common.LeaderNotAvailableException: Leader not available for
>> partition [junit2_analytics_data_log,0]
>>        at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:468)
>>        at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:456)
>>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>>        at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>>        at scala.collection.immutable.List.foreach(List.scala:76)
>>        at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
>>        at scala.collection.immutable.List.map(List.scala:76)
>>        at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:456)
>>        at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:452)
>>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>>        at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
>>        at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
>>        at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
>>        at scala.collection.immutable.HashSet.scala$collection$SetLike$$super$map(HashSet.scala:33)
>>        at scala.collection.SetLike$class.map(SetLike.scala:93)
>>        at scala.collection.immutable.HashSet.map(HashSet.scala:33)
>>        at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:452)
>>        at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
>>        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
>>        at java.lang.Thread.run(Thread.java:662)
>> 
