Sandeep,

You need to have multiple replicas. Having a single replica means you have only
one copy of the data, and if that machine goes down there is no other replica
that can take over and become the leader for that partition.
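As a sketch of the fix (assuming broker ids 0, 1 and 2 and ZooKeeper at
localhost:2181, as in your describe output), you could either create the topic
with a higher replication factor in the first place:

  bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 3 --partitions 3 --topic testv1p3

or raise the replication factor of the existing topic with a partition
reassignment. For example, with a file such as increase-rf.json (the file name
and the particular replica assignments below are only an illustration):

  {"version":1,
   "partitions":[{"topic":"testv1p3","partition":0,"replicas":[1,2]},
                 {"topic":"testv1p3","partition":1,"replicas":[2,0]},
                 {"topic":"testv1p3","partition":2,"replicas":[0,1]}]}

  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file increase-rf.json --execute

Once each partition has more than one replica in the ISR, the controller can
elect a new leader automatically when a broker goes down; no extra broker or
topic configuration is needed for that.

-Harsha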



    _____________________________
From: Sandeep Bishnoi <sandeepbishnoi.b...@gmail.com>
Sent: Friday, June 19, 2015 2:37 PM
Subject: Failure in Leader Election on broker shutdown
To:  <users@kafka.apache.org>


Hi,

 I have a Kafka cluster of three nodes.

 I have created a topic with the following command:
 bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 3 --topic testv1p3

 So the topic "testv1p3" has 3 partitions and a replication factor of 1.
 Here is the result of the describe command:
  kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --describe --zookeeper
localhost:2181 --topic testv1p3

Topic:testv1p3    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: testv1p3    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
    Topic: testv1p3    Partition: 1    Leader: 2    Replicas: 2    Isr: 2
    Topic: testv1p3    Partition: 2    Leader: 0    Replicas: 0    Isr: 0

So far things are good.

Now I tried to kill a broker using bin/kafka-server-stop.sh
The broker was stopped successfully.

 Now I wanted to ensure that there is a new leader for the partition that
was hosted on the terminated broker.
 Here is the output of the describe command after the broker termination:
 Topic:testv1p3    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: testv1p3    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
    Topic: testv1p3    Partition: 1    Leader: -1    Replicas: 2    Isr:
    Topic: testv1p3    Partition: 2    Leader: 0    Replicas: 0    Isr: 0

The leader for partition 1 is -1.

The Kafka Java API returns null for leader() in PartitionMetadata for
partition 1.

When I restarted the broker that was stopped earlier, things went back to
normal.

1) Does leader election happen automatically?
2) If yes, do I need any particular configuration in the broker or topic
config?
3) If not, what is the command to ensure that I have a leader for partition 1
in case its leader broker goes down?
 FYI, I tried to run bin/kafka-preferred-replica-election.sh --zookeeper
localhost:2181
 After running this script, the topic description remains the same and there
is still no leader for partition 1.

It would be great to get any help on this.


Reference:
Console log for (kafka-server-stop.sh):
[2015-06-19 14:25:00,241] INFO [Kafka Server 2], shutting down
(kafka.server.KafkaServer)
[2015-06-19 14:25:00,243] INFO [Kafka Server 2], Starting controlled
shutdown (kafka.server.KafkaServer)
[2015-06-19 14:25:00,267] INFO [Kafka Server 2], Controlled shutdown
succeeded (kafka.server.KafkaServer)
[2015-06-19 14:25:00,273] INFO Deregistered broker 2 at path
/brokers/ids/2. (kafka.utils.ZkUtils$)
[2015-06-19 14:25:00,274] INFO [Socket Server on Broker 2], Shutting down
(kafka.network.SocketServer)
[2015-06-19 14:25:00,279] INFO [Socket Server on Broker 2], Shutdown
completed (kafka.network.SocketServer)
[2015-06-19 14:25:00,280] INFO [Kafka Request Handler on Broker 2],
shutting down (kafka.server.KafkaRequestHandlerPool)
[2015-06-19 14:25:00,282] INFO [Kafka Request Handler on Broker 2], shut
down completely (kafka.server.KafkaRequestHandlerPool)
[2015-06-19 14:25:00,600] INFO [Replica Manager on Broker 2]: Shut down
(kafka.server.ReplicaManager)
[2015-06-19 14:25:00,601] INFO [ReplicaFetcherManager on broker 2] shutting
down (kafka.server.ReplicaFetcherManager)
[2015-06-19 14:25:00,602] INFO [ReplicaFetcherManager on broker 2] shutdown
completed (kafka.server.ReplicaFetcherManager)
[2015-06-19 14:25:00,604] INFO [Replica Manager on Broker 2]: Shut down
completely (kafka.server.ReplicaManager)
[2015-06-19 14:25:00,605] INFO Shutting down. (kafka.log.LogManager)
[2015-06-19 14:25:00,618] INFO Shutdown complete. (kafka.log.LogManager)
[2015-06-19 14:25:00,620] WARN Kafka scheduler has not been started
(kafka.utils.Utils$)
java.lang.IllegalStateException: Kafka scheduler has not been started
    at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
    at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
    at
kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
    at kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
    at
kafka.server.KafkaServer$$anonfun$shutdown$9.apply$mcV$sp(KafkaServer.scala:287)
    at kafka.utils.Utils$.swallow(Utils.scala:172)
    at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
    at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
    at kafka.utils.Logging$class.swallow(Logging.scala:94)
    at kafka.utils.Utils$.swallow(Utils.scala:45)
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:287)
    at
kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
    at kafka.Kafka$$anon$1.run(Kafka.scala:42)
[2015-06-19 14:25:00,623] INFO Terminate ZkClient event thread.
(org.I0Itec.zkclient.ZkEventThread)
[2015-06-19 14:25:00,625] INFO Session: 0x14de8e5f2b801f7 closed
(org.apache.zookeeper.ZooKeeper)
[2015-06-19 14:25:00,625] INFO EventThread shut down
(org.apache.zookeeper.ClientCnxn)
[2015-06-19 14:25:00,625] INFO [Kafka Server 2], shut down completed
(kafka.server.KafkaServer)


Regards,
Sandeep
