[ https://issues.apache.org/jira/browse/KAFKA-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020705#comment-16020705 ]

ASF GitHub Bot commented on KAFKA-5310:
---------------------------------------

GitHub user onurkaraman opened a pull request:

    https://github.com/apache/kafka/pull/3122

    KAFKA-5310: reset ControllerContext during resignation

    This ticket is all about ControllerContext initialization and teardown. The key points are:
    1. We should tear down ControllerContext during resignation instead of waiting for the next election to fix it up. A heap dump shows that the former controller keeps essentially all of its ControllerContext state lying around.
    2. We don't properly tear down/reset ControllerContext.partitionsBeingReassigned. This can cause problems when the former controller is later re-elected as controller.
    
    Suppose a partition assignment is initially R0. Now suppose a reassignment R1 gets stuck on controller C0 and an admin tries to "undo" R1 (by deleting /admin/reassign_partitions, deleting /controller, and submitting another reassignment specifying R0). The new controller C1 may complete R0 successfully. If the controller later moves back to C0, C0 will reattempt R1 even though that partition reassignment was cleared from zookeeper before the controller shifted back to C0. This results in the actual partition assignment in zookeeper being unexpectedly changed back to R1.
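    
    To make that concrete, here's a minimal self-contained sketch (simplified stand-in names, not the actual KafkaController/ControllerContext code) of what resetting on resignation means and why skipping it leaves stale reassignment state behind:
    {code}
    import scala.collection.mutable
    
    // Hypothetical, pared-down stand-in for the controller's per-session state.
    class ContextSketch {
      // topic-partition -> target replica list for in-flight reassignments
      val partitionsBeingReassigned = mutable.Map.empty[String, Seq[Int]]
    
      // What the patch proposes conceptually: clear the state when the broker
      // resigns as controller, instead of waiting on the next election to fix it up.
      def resetOnResignation(): Unit = partitionsBeingReassigned.clear()
    }
    
    object ResignationSketch extends App {
      val ctx = new ContextSketch
      ctx.partitionsBeingReassigned("t-0") = Seq(1, 2) // the stuck reassignment R1
    
      // Without the reset, the former controller still "remembers" R1 after
      // resigning, even though zookeeper may no longer request it.
      println(ctx.partitionsBeingReassigned) // still contains t-0 -> [1, 2]
      ctx.resetOnResignation()
      println(ctx.partitionsBeingReassigned) // empty, nothing stale to replay
    }
    {code}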

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/onurkaraman/kafka KAFKA-5310

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/3122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3122
    
----
commit 256ff84b8263295f393b1c9eb0fdcdc35e5d67e8
Author: Onur Karaman <okara...@linkedin.com>
Date:   2017-05-23T06:28:47Z

    KAFKA-5310: reset ControllerContext during resignation
    

----


> reset ControllerContext during resignation
> ------------------------------------------
>
>                 Key: KAFKA-5310
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5310
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Onur Karaman
>            Assignee: Onur Karaman
>
> This ticket is all about ControllerContext initialization and teardown. The key points are:
> 1. We should tear down ControllerContext during resignation instead of waiting for the next election to fix it up. A heap dump shows that the former controller keeps essentially all of its ControllerContext state lying around.
> 2. We don't properly tear down/reset {{ControllerContext.partitionsBeingReassigned}}. This caused problems for us in a production cluster at LinkedIn, as shown in the scenario below:
> {code}
> > rm -rf /tmp/zookeeper/ /tmp/kafka-logs* logs*
> > ./gradlew clean jar
> > ./bin/zookeeper-server-start.sh config/zookeeper.properties
> > export LOG_DIR=logs0 && ./bin/kafka-server-start.sh config/server0.properties
> > export LOG_DIR=logs1 && ./bin/kafka-server-start.sh config/server1.properties
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t --replica-assignment 1
> > ./bin/zookeeper-shell.sh localhost:2181
> get /brokers/topics/t
> {"version":1,"partitions":{"0":[1]}}
> create /admin/reassign_partitions {"partitions":[{"topic":"t","partition":0,"replicas":[1,2]}],"version":1}
> Created /admin/reassign_partitions
> get /brokers/topics/t
> {"version":1,"partitions":{"0":[1,2]}}
> get /admin/reassign_partitions
> {"version":1,"partitions":[{"topic":"t","partition":0,"replicas":[1,2]}]}
> delete /admin/reassign_partitions
> delete /controller
> get /brokers/topics/t
> {"version":1,"partitions":{"0":[1,2]}}
> get /admin/reassign_partitions
> Node does not exist: /admin/reassign_partitions
> > echo '{"partitions":[{"topic":"t","partition":0,"replicas":[1]}],"version":1}' > reassignment.txt
> > ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassignment.txt --execute
> get /brokers/topics/t
> {"version":1,"partitions":{"0":[1]}}
> get /admin/reassign_partitions
> Node does not exist: /admin/reassign_partitions
> delete /controller
> get /brokers/topics/t
> {"version":1,"partitions":{"0":[1,2]}}
> get /admin/reassign_partitions
> Node does not exist: /admin/reassign_partitions
> {code}
> Notice that the replica set goes from \[1\] to \[1,2\] (expected, from the explicit {{/admin/reassign_partitions}} znode created under the initial controller), back to \[1\] (expected, from the partition reassignment executed under the second controller), and then back to \[1,2\] after the original controller gets re-elected.
> That last transition from \[1\] to \[1,2\] is unexpected. It happens because the original controller never resets its {{ControllerContext.partitionsBeingReassigned}}: {{initializePartitionReassignment}} simply adds to whatever is already in {{ControllerContext.partitionsBeingReassigned}} rather than replacing it.
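> Here's a tiny self-contained sketch (hypothetical names, not the real controller code) of that add-instead-of-replace behavior and why the leftover entry resurfaces on re-election:
> {code}
> import scala.collection.mutable
> 
> object StaleReassignmentSketch extends App {
>   // Stand-in for C0's ControllerContext.partitionsBeingReassigned.
>   val partitionsBeingReassigned = mutable.Map.empty[String, Seq[Int]]
> 
>   // C0 is controller and picks up the stuck reassignment R1 = [1,2] for t-0.
>   partitionsBeingReassigned("t-0") = Seq(1, 2)
> 
>   // C0 resigns without resetting its context (the bug), so the entry survives.
>   // Meanwhile C1 completes R0 = [1] and /admin/reassign_partitions is deleted,
>   // so on re-election zookeeper reports no pending reassignments.
>   val pendingInZookeeper = Map.empty[String, Seq[Int]]
> 
>   // The additive initialization merges zookeeper's (now empty) view into the
>   // map instead of replacing the map, so the stale entry stays.
>   partitionsBeingReassigned ++= pendingInZookeeper
> 
>   println(partitionsBeingReassigned) // still contains t-0 -> [1, 2]
>   // C0 "resumes" R1 and flips t-0 back to [1,2]. With the fix (reset during
>   // resignation) this map would be empty here.
> }
> {code}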
> The explicit {{/admin/reassign_partitions}} znode creation is there to circumvent KAFKA-5161 (95b48b157aca44beec4335e62a59f37097fe7499). Doing so is valid since:
> 1. Our code in production doesn't have that change.
> 2. KAFKA-5161 doesn't address the underlying race condition between a broker failure and the ReassignPartitionsCommand tool creating the znode.


