For now, you can still add servers, but only newly created topics will be
placed on them. If you just remove a server, you will be down one replica.
What you can do instead is replace a server with a new one that keeps the
same broker id.
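As a rough sketch of what that looks like (the broker id 4 below is just an
example, and this assumes the kafka-topics.sh in your build supports the
--under-replicated-partitions flag):

  # config/server.properties on the replacement machine: reuse the old id
  broker.id=4

  # after starting the new broker, this should eventually report no
  # under-replicated partitions once it has caught up
  bin/kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

If that flag isn't available in your build, the same information is exposed
through the broker's UnderReplicatedPartitions JMX metric on ReplicaManager.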
To recover from your error: (1) bring the 3 old brokers back up; (2) bring
one down and start a new broker with the same broker id; (3) wait until it's
fully caught up (the under-replicated count drops to 0, as in the check
above); (4) repeat steps 2 and 3 for the remaining 2 brokers.

Thanks,

Jun

On Wed, Aug 7, 2013 at 7:50 AM, Vadim Keylis <vkeylis2...@gmail.com> wrote:

> Jun,
>
> What is the process now if we want to add and remove servers? How can I
> recover from the error in the meantime?
> When is the final release?
>
> Thanks,
> Vadim
>
>
> On Wed, Aug 7, 2013 at 7:37 AM, Jun Rao <jun...@gmail.com> wrote:
>
> > We do have a tool, ReassignPartitionsCommand, that allows you to move
> > data from one broker to another. It's still being tested and improved.
> > It will be complete in the 0.8 final release.
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Tue, Aug 6, 2013 at 11:18 PM, Vadim Keylis <vkeylis2...@gmail.com>
> > wrote:
> >
> > > Your assumption is correct. However, I was under the impression that
> > > a controlled shutdown would do the rebalancing; was I wrong? What is
> > > the proper way to shut down and allow the partitions to be served by
> > > other brokers? What is the process to recover from this? How do I
> > > avoid this and still keep the ability to remove servers from the
> > > tier?
> > >
> > > Sent from my iPad
> > >
> > > On Aug 6, 2013, at 10:55 PM, Tejas Patil <tejas.patil...@gmail.com>
> > > wrote:
> > >
> > > > I assume that "We had 6 kafka in the tier" means that you had 6
> > > > kafka brokers.
> > > >
> > > > About the exception that you see: I think that the 3 brokers you
> > > > took down were holding the data for [junit2_analytics_data_log,0]
> > > > and no other live broker has the data for
> > > > [junit2_analytics_data_log,0].
> > > >
> > > > You could run this command to see the details about the partition
> > > > assignment for that topic:
> > > > bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic
> > > > junit2_analytics_data_log
> > > >
> > > >
> > > > On Tue, Aug 6, 2013 at 10:08 PM, Vadim Keylis
> > > > <vkeylis2...@gmail.com> wrote:
> > > >
> > > >> We are using kafka08 beta1. We had 6 kafka in the tier. I have
> > > >> replication set to 3. Then 3 servers were removed using the
> > > >> controlled shutdown method. I am getting the error below after
> > > >> that.
> > > >> What went wrong during shutdown? How do I recover from the error?
> > > >> What steps should I take to avoid this in the future?
> > > >>
> > > >> Thanks so much in advance.
> > > >>
> > > >>
> > > >> [2013-08-06 21:56:46,044] ERROR [KafkaApi-7] Error while fetching metadata
> > > >> for partition [junit2_analytics_data_log,0] (kafka.server.KafkaApis)
> > > >> kafka.common.LeaderNotAvailableException: Leader not available for
> > > >> partition [junit2_analytics_data_log,0]
> > > >>     at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:468)
> > > >>     at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:456)
> > > >>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >>     at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
> > > >>     at scala.collection.immutable.List.foreach(List.scala:76)
> > > >>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> > > >>     at scala.collection.immutable.List.map(List.scala:76)
> > > >>     at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:456)
> > > >>     at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:452)
> > > >>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >>     at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> > > >>     at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> > > >>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> > > >>     at scala.collection.immutable.HashSet.scala$collection$SetLike$$super$map(HashSet.scala:33)
> > > >>     at scala.collection.SetLike$class.map(SetLike.scala:93)
> > > >>     at scala.collection.immutable.HashSet.map(HashSet.scala:33)
> > > >>     at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:452)
> > > >>     at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> > > >>     at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> > > >>     at java.lang.Thread.run(Thread.java:662)