[ 
https://issues.apache.org/jira/browse/KAFKA-2551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900422#comment-14900422
 ] 

jin xing edited comment on KAFKA-2551 at 9/21/15 9:31 AM:
----------------------------------------------------------

Yes, I agree with you; currently this can be configured via the 
unclean.leader.election.enable broker config.
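For reference, this is a broker-level property in 0.8.x (defaulting to true); a 
minimal sketch of setting it in server.properties would be:

{code}
# server.properties (broker config); the default in 0.8.x is true
# set to false to prefer consistency (partition stays offline) over availability
unclean.leader.election.enable=false
{code}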

To try out this parameter, I did an experiment:
I have 2 brokers (broker0 and broker1) and 1 topic named "p1r2" with 1 partition 
and 2 replicas.
First, I used the default unclean.leader.election.enable=true; at this point the 
log size was 1000.
Then I shut down broker1 and used the console producer to send messages into the 
Kafka queue; the tail log size became 1009.
Then I shut down broker0 and restarted broker1; the log size was back to 1000, 
but the leader had changed to broker1.
But when I tried to send messages to Kafka (at this moment only broker1 was 
alive), I got the exception below:


[2015-09-21 17:06:06,654] ERROR fetching topic metadata for topics [Set(p1r2)] from broker [ArrayBuffer(id:0,host:soho-pipe-kafka-test1-test,port:9092, id:1,host:soho-pipe-kafka-test2-test:9092,port:9092)] failed (kafka.utils.Utils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(p1r2)] from broker [ArrayBuffer(id:0,host:soho-pipe-kafka-test1-test,port:9092, id:1,host:soho-pipe-kafka-test2-test:9092,port:9092)] failed
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
        at kafka.utils.Utils$.swallow(Utils.scala:172)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:45)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
        at scala.collection.immutable.Stream.foreach(Stream.scala:547)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
Caused by: java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
        ... 12 more
[2015-09-21 17:06:06,655] ERROR Failed to send requests for topics p1r2 with correlation ids in [17,24] (kafka.producer.async.DefaultEventHandler)
[2015-09-21 17:06:06,655] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
        at scala.collection.immutable.Stream.foreach(Stream.scala:547)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
With unclean.leader.election.enable set, I was expecting that I could still send 
messages to broker1, even though the messages on broker0 would be lost.
Am I wrong?
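
For completeness, here is a rough sketch of the producer side of the experiment 
using the old 0.8.x Scala producer API (the same code path as the stack trace 
above). The broker host names are the ones from the log; the object name and 
message content are just placeholders for illustration:

{code}
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object UncleanElectionProbe {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // both brokers from the experiment are listed; when only broker1 is alive,
    // the metadata fetch can fail as shown in the log above
    props.put("metadata.broker.list",
      "soho-pipe-kafka-test1-test:9092,soho-pipe-kafka-test2-test:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    props.put("request.required.acks", "1")

    val producer = new Producer[String, String](new ProducerConfig(props))
    try {
      // send one message to the "p1r2" topic, mirroring the console producer step
      producer.send(new KeyedMessage[String, String]("p1r2", "probe message"))
    } finally {
      producer.close()
    }
  }
}
{code}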



> Unclean leader election docs outdated
> -------------------------------------
>
>                 Key: KAFKA-2551
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2551
>             Project: Kafka
>          Issue Type: Bug
>          Components: website
>    Affects Versions: 0.8.2.2
>            Reporter: Stevo Slavic
>            Assignee: jin xing
>            Priority: Trivial
>              Labels: documentation, newbie
>
> Current unclean leader election docs state:
> {quote}
> In the future, we would like to make this configurable to better support use 
> cases where downtime is preferable to inconsistency.
> {quote}
> Since 0.8.2.0, the unclean leader election strategy (whether to allow it or 
> not) has been configurable via the {{unclean.leader.election.enable}} broker 
> config property.
> That sentence is in both 
> https://svn.apache.org/repos/asf/kafka/site/083/design.html and 
> https://svn.apache.org/repos/asf/kafka/site/082/design.html near the end of 
> the "Unclean leader election: What if they all die?" section. The next section, 
> "Availability and Durability Guarantees", mentions the ability to disable 
> unclean leader election, so likely just this one reference needs to be updated.


