-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32890/#review79533
-----------------------------------------------------------


Thanks for the patch. Interesting improvement. Some comments below.


core/src/main/scala/kafka/controller/ControllerChannelManager.scala
<https://reviews.apache.org/r/32890/#comment129017>

    Maybe change the name to ensurePreviousRequestsFinish()?



core/src/main/scala/kafka/controller/ControllerChannelManager.scala
<https://reviews.apache.org/r/32890/#comment129029>

    It looks like we only want to throttle controlled shutdown and automatic 
preferred leader election, but this send method is used in all situations: 
manual preferred leader election, partition reassignment, controller failover, 
broker startup, topic deletion...
    All the requests from those situations affect the tracker semaphore value 
as well. Is this expected?
    There are some other issues, e.g. with throttling those events are more 
likely to interleave with each other, which we might also want to consider. 
Although that may be out of the scope of this ticket, I just want to make sure 
we won't make things worse.
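    One possible way to scope it (purely an illustration, not part of the 
patch): pass an explicit flag from the two call sites we actually want to 
throttle, so the other paths bypass the semaphore. trackRequest below is a 
hypothetical stand-in for the patch's semaphore bookkeeping, and the request 
type is elided:

    class ChannelManagerSketch {
      // Hypothetical helper standing in for the semaphore acquire/bookkeeping.
      private def trackRequest(brokerId: Int): Unit = { }

      def sendRequest(brokerId: Int, request: AnyRef,
                      throttle: Boolean = false): Unit = {
        if (throttle)
          trackRequest(brokerId)  // only controlled shutdown / auto preferred leader election
        // ... existing enqueue logic unchanged for all other callers
      }
    }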



core/src/main/scala/kafka/controller/ControllerChannelManager.scala
<https://reviews.apache.org/r/32890/#comment129015>

    It seems info.messageQueue.remainingCapacity should be 
info.messageQueue.size.
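    For reference, a minimal sketch of the difference (assuming the queue is a 
standard java.util.concurrent.LinkedBlockingQueue, as elsewhere in this class):

    import java.util.concurrent.LinkedBlockingQueue

    // Illustrative only: a bounded queue holding 3 of a possible 10 elements.
    val queue = new LinkedBlockingQueue[Int](10)
    (1 to 3).foreach(queue.put(_))

    println(queue.size)               // 3 -> queued (unsent) requests, what we want
    println(queue.remainingCapacity)  // 7 -> free slots left, not the backlog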



core/src/main/scala/kafka/controller/ControllerChannelManager.scala
<https://reviews.apache.org/r/32890/#comment129030>

    Although it is unlikely we will hit the limit, throwing an exception does 
not look like the right thing to do here. Maybe we should just use acquire and 
block until some messages are sent.
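    A minimal sketch of the blocking alternative, assuming a 
java.util.concurrent.Semaphore (the names below are placeholders, not the 
patch's actual fields):

    import java.util.concurrent.Semaphore

    val maxInFlightRequests = 100                    // placeholder limit
    val tracker = new Semaphore(maxInFlightRequests) // stands in for the patch's tracker

    def send(request: AnyRef): Unit = {
      // Block until a permit is free instead of tryAcquire() + throw,
      // i.e. wait until some previously sent requests have been acknowledged.
      tracker.acquire()
      // ... enqueue the request; the response callback calls tracker.release()
    }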



core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/32890/#comment129031>

    Can you remove the "ODKL Patch"?



core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/32890/#comment129033>

    Existing code, but I think we should make the entire preferred leader 
election on a broker atomic with respect to controlled shutdown.
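    For illustration, one way to get that atomicity is to run the whole 
per-broker election and the controlled shutdown under the same lock, similar 
to how controllerContext.controllerLock is used today. This is only a sketch 
with placeholder method bodies:

    import java.util.concurrent.locks.ReentrantLock

    class ControllerSketch {
      private val controllerLock = new ReentrantLock()

      private def inLock[T](lock: ReentrantLock)(fun: => T): T = {
        lock.lock()
        try fun finally lock.unlock()
      }

      def onPreferredReplicaElection(partitions: Set[(String, Int)]): Unit =
        inLock(controllerLock) {
          // move leadership for all of the broker's partitions in one critical
          // section, so a concurrent controlled shutdown cannot interleave
        }

      def shutdownBroker(id: Int): Unit =
        inLock(controllerLock) {
          // controlled shutdown logic
        }
    }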



core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala
<https://reviews.apache.org/r/32890/#comment129040>

    It looks like with the current replica assignment algorithm the assignment 
has already taken care of the distribution, so the second replicas for 
different partitions are on different brokers. It should not overwhelm a 
particular broker in this case, unless you have replicas on all the brokers. 
We might need to think about a more general solution for all the scenarios.
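    As a simplified illustration (not the exact AdminUtils shift logic): with 
6 partitions, replication factor 2 and brokers 0..2, the second replicas 
already rotate across the brokers, so the leadership they pick up is spread 
out:

    // Simplified round-robin assignment for illustration only.
    val brokers = Vector(0, 1, 2)
    val assignment = (0 until 6).map { p =>
      p -> Seq(brokers(p % brokers.size), brokers((p + 1) % brokers.size))
    }
    assignment.foreach { case (p, replicas) =>
      println(s"partition $p -> replicas $replicas")
    }
    // partition 0 -> replicas List(0, 1)
    // partition 1 -> replicas List(1, 2)
    // partition 2 -> replicas List(2, 0)
    // ...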



core/src/main/scala/kafka/server/ReplicaManager.scala
<https://reviews.apache.org/r/32890/#comment128942>

    This seems to no longer be needed now that KAFKA-1647 is checked in, 
because we now always restore the HW on each makeLeader and makeFollower. Can 
you double check?


- Jiangjie Qin


On April 6, 2015, 5:28 p.m., Dmitry Bugaychenko wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/32890/
> -----------------------------------------------------------
> 
> (Updated April 6, 2015, 5:28 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2029
>     https://issues.apache.org/jira/browse/KAFKA-2029
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> Controlled shutdown patch
> 
> 
> Diffs
> -----
> 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> 97acdb23f6e95554c3e0357aa112eddfc875efbc 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 3a09377611b48198c4c3cd1a118fc12eda0543d4 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> 3b15ab4eef22c6f50a7483e99a6af40fb55aca9f 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 144a15e3a53d47fa972bef277c335da484842cf9 
> 
> Diff: https://reviews.apache.org/r/32890/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Dmitry Bugaychenko
> 
>
