The problem is indicated by the following entry in broker 1's controller log. Were you deleting any topic at that time?
[2014-05-12 21:24:37,930] ERROR [BrokerChangeListener on Controller 1]: Error while handling broker changes (kafka.controller.ReplicaStateMachine$BrokerChangeListener)
java.util.NoSuchElementException: key not found: [ocp.fillhealth.us-east-1,0]
        at scala.collection.MapLike$class.default(MapLike.scala:225)
        at scala.collection.mutable.HashMap.default(HashMap.scala:45)
        at scala.collection.MapLike$class.apply(MapLike.scala:135)
        at scala.collection.mutable.HashMap.apply(HashMap.scala:45)
        at kafka.controller.ControllerBrokerRequestBatch.updateMetadataRequestMapFor$1(ControllerChannelManager.scala:242)
        at kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$1.apply(ControllerChannelManager.scala:268)
        at kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$1.apply(ControllerChannelManager.scala:268)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:99)
        at scala.collection.Iterator$class.foreach(Iterator.scala:772)
        at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:99)
        at kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:268)
        at kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:940)
        at kafka.controller.KafkaController.onBrokerStartup(KafkaController.scala:386)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:342)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:330)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:330)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:329)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:328)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:328)
        at kafka.utils.Utils$.inLock(Utils.scala:538)
        at kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:327)
        at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
        at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)

Thanks,

Jun
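For context on the stack trace: Scala's mutable.HashMap.apply throws NoSuchElementException when the key is absent, so the controller's broker-change handler dies here if the (topic, partition) entry has been removed from its cache in the meantime, e.g. by a concurrent topic deletion. A minimal sketch of the failure mode and a defensive alternative; the cache below is made up for illustration and is not Kafka's actual data structure in ControllerChannelManager:

    import scala.collection.mutable

    object MapLookupSketch extends App {
      // Made-up stand-in for a controller-side cache keyed by
      // (topic, partition), like the [ocp.fillhealth.us-east-1,0] key above.
      val leaderCache = mutable.HashMap(("ocp.fillhealth.us-east-1", 0) -> 1)

      // A topic deletion can remove the entry between the broker-change
      // event firing and the controller building its UpdateMetadata request:
      leaderCache.remove(("ocp.fillhealth.us-east-1", 0))

      // apply() on a missing key throws the NoSuchElementException seen above:
      // leaderCache(("ocp.fillhealth.us-east-1", 0))

      // A defensive get() lets the caller skip partitions no longer tracked:
      leaderCache.get(("ocp.fillhealth.us-east-1", 0)) match {
        case Some(leader) => println("leader is " + leader)
        case None         => println("partition no longer tracked; skipping")
      }
    }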
On Tue, May 13, 2014 at 9:50 AM, Steven Wu <steve...@netflix.com> wrote:

> Sorry for the wrong log file. Please see the attached zip file for all 4
> log files.
>
> On Mon, May 12, 2014 at 9:05 PM, Jun Rao <jun...@gmail.com> wrote:
>
>> The controller log in broker 1 is too late. Could you send its log
>> around 2014-05-12 21:24:37?
>>
>> Thanks,
>>
>> Jun
>>
>> On Mon, May 12, 2014 at 5:02 PM, Steven Wu <steve...@netflix.com> wrote:
>>
>> > This is a three-node cluster. Broker 0 lost its connection to ZK; broker 1
>> > does seem to take over the controller role, but broker 0 got stuck in the
>> > bad state and wasn't able to recover.
>> >
>> > It seems to start with these error msgs. I have attached the complete
>> > controller and server logs for brokers 0 and 1.
>> >
>> > I am using "kafka_2.9.2-0.8.1.1.jar".
>> >
>> > Thanks,
>> > Steven
>> >
>> > [2014-05-12 21:24:28,737] INFO Client session timed out, have not heard from server in 4000ms for sessionid 0xb145a9585a806013, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:28,838] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
>> > [2014-05-12 21:24:29,360] INFO Opening socket connection to server ip-10-84-58-49.ec2.internal/10.84.58.49:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:30,562] INFO Client session timed out, have not heard from server in 1724ms for sessionid 0xb145a9585a806013, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:30,706] INFO Opening socket connection to server ip-10-45-171-150.ec2.internal/10.45.171.150:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:31,907] INFO Client session timed out, have not heard from server in 1245ms for sessionid 0xb145a9585a806013, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:32,927] INFO Opening socket connection to server ip-10-45-172-197.ec2.internal/10.45.172.197:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:34,128] INFO Client session timed out, have not heard from server in 2120ms for sessionid 0xb145a9585a806013, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:34,380] INFO Opening socket connection to server ip-10-33-144-217.ec2.internal/10.33.144.217:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:35,581] INFO Client session timed out, have not heard from server in 1353ms for sessionid 0xb145a9585a806013, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:36,033] INFO Closing socket connection to /10.60.94.175. (kafka.network.Processor)
>> > [2014-05-12 21:24:37,282] INFO Opening socket connection to server ip-10-78-235-244.ec2.internal/10.78.235.244:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,284] INFO Socket connection established to ip-10-78-235-244.ec2.internal/10.78.235.244:2181, initiating session (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,288] INFO zookeeper state changed (Expired) (org.I0Itec.zkclient.ZkClient)
>> > [2014-05-12 21:24:37,288] INFO Unable to reconnect to ZooKeeper service, session 0xb145a9585a806013 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,288] INFO Initiating client connection, connectString=ec2-50-19-255-1.compute-1.amazonaws.com:2181,ec2-54-235-159-245.compute-1.amazonaws.com:2181,ec2-50-19-255-97.compute-1.amazonaws.com:2181,ec2-184-73-152-248.compute-1.amazonaws.com:2181,ec2-50-17-247-179.compute-1.amazonaws.com:2181/kafkabroker/default sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6f2a4d2f (org.apache.zookeeper.ZooKeeper)
>> > [2014-05-12 21:24:37,305] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,311] INFO Opening socket connection to server ip-10-171-10-136.ec2.internal/10.171.10.136:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,311] INFO Socket connection established to ip-10-171-10-136.ec2.internal/10.171.10.136:2181, initiating session (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,316] INFO Session establishment complete on server ip-10-171-10-136.ec2.internal/10.171.10.136:2181, sessionid = 0xb445f1c68e5c013d, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
>> > [2014-05-12 21:24:37,316] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
>> > [2014-05-12 21:24:37,335] INFO conflict in /controller data: {"version":1,"brokerid":0,"timestamp":"1399929877328"} stored data: {"version":1,"brokerid":1,"timestamp":"1399321265926"} (kafka.utils.ZkUtils$)
>> > [2014-05-12 21:24:37,348] INFO re-registering broker info in ZK for broker 0 (kafka.server.KafkaHealthcheck)
>> > [2014-05-12 21:24:37,352] INFO Registered broker 0 at path /brokers/ids/0 with address ec2-23-20-93-219.compute-1.amazonaws.com:7101. (kafka.utils.ZkUtils$)
>> > [2014-05-12 21:24:37,363] INFO done re-registering broker (kafka.server.KafkaHealthcheck)
>> > [2014-05-12 21:24:37,372] INFO Subscribing to /brokers/topics path to watch for new topics (kafka.server.KafkaHealthcheck)
>> > [2014-05-12 21:24:37,382] INFO New leader is 1 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
>> > [2014-05-12 21:24:38,791] ERROR Closing socket for /10.165.36.77 because of error (kafka.network.Processor)
>> > java.io.IOException: Connection reset by peer
>> >         at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>> >         at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>> >         at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>> >         at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>> >         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>> >         at kafka.utils.Utils$.read(Utils.scala:375)
>> >         at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>> >         at kafka.network.Processor.read(SocketServer.scala:347)
>> >         at kafka.network.Processor.run(SocketServer.scala:245)
>> >         at java.lang.Thread.run(Thread.java:724)
>> > [2014-05-12 21:24:55,939] INFO Partition [trace_annotation,4] on broker 0: Shrinking ISR for partition [trace_annotation,4] from 0,2 to 0 (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,946] ERROR Conditional update of path /brokers/topics/trace_annotation/partitions/4/state with data {"controller_epoch":10,"leader":0,"version":1,"leader_epoch":22,"isr":[0]} and expected version 76 failed due to org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /brokers/topics/trace_annotation/partitions/4/state (kafka.utils.ZkUtils$)
>> > [2014-05-12 21:24:55,958] INFO Partition [trace_annotation,4] on broker 0: Cached zkVersion [76] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,959] INFO Partition [merchimpressions_client_log,2] on broker 0: Shrinking ISR for partition [merchimpressions_client_log,2] from 0,2 to 0 (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,963] ERROR Conditional update of path /brokers/topics/merchimpressions_client_log/partitions/2/state with data {"controller_epoch":10,"leader":0,"version":1,"leader_epoch":5,"isr":[0]} and expected version 14 failed due to org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /brokers/topics/merchimpressions_client_log/partitions/2/state (kafka.utils.ZkUtils$)
>> > [2014-05-12 21:24:55,963] INFO Partition [merchimpressions_client_log,2] on broker 0: Cached zkVersion [14] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,963] INFO Partition [request_trace,2] on broker 0: Shrinking ISR for partition [request_trace,2] from 0,1 to 0 (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,967] ERROR Conditional update of path /brokers/topics/request_trace/partitions/2/state with data {"controller_epoch":10,"leader":0,"version":1,"leader_epoch":18,"isr":[0]} and expected version 44 failed due to org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /brokers/topics/request_trace/partitions/2/state (kafka.utils.ZkUtils$)
>> > [2014-05-12 21:24:55,967] INFO Partition [request_trace,2] on broker 0: Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,967] INFO Partition [hermesdiagnostics,3] on broker 0: Shrinking ISR for partition [hermesdiagnostics,3] from 0,2 to 0 (kafka.cluster.Partition)
>> > [2014-05-12 21:24:55,970] INFO running health check command (com.netflix.nfkafka.initialization.KafkaHealthCheckHandler)
>> > [2014-05-12 21:24:55,971] ERROR Conditional update of path /brokers/topics/hermesdiagnostics/partitions/3/state with data {"controller_epoch":10,"leader":0,"version":1,"leader_epoch":22,"isr":[0]} and expected version 326 failed due to org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /brokers/topics/hermesdiagnostics/partitions/3/state (kafka.utils.ZkUtils$)
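A note on the repeated "Conditional update ... BadVersionException" errors above: ZooKeeper's setData takes an expected znode version and behaves as a compare-and-set, so once the new controller (broker 1) has rewritten a partition-state znode, broker 0's writes, made with its stale cached zkVersion and old controller epoch 10, are rejected and it skips the ISR update. A minimal sketch of the pattern using the standard ZooKeeper Java client; this is illustrative, not Kafka's actual ZkUtils code:

    import org.apache.zookeeper.{KeeperException, ZooKeeper}

    object ConditionalUpdateSketch {
      // expectedVersion is the caller's cached zkVersion. ZooKeeper applies
      // the write only if the znode is still at exactly that version;
      // otherwise it throws BadVersionException and the stale writer backs off.
      def tryUpdatePartitionState(zk: ZooKeeper, path: String,
                                  data: Array[Byte], expectedVersion: Int): Boolean = {
        try {
          zk.setData(path, data, expectedVersion)
          true
        } catch {
          case e: KeeperException.BadVersionException =>
            false // someone else (here, the new controller) updated the znode first
        }
      }
    }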