splett2 commented on a change in pull request #11255:
URL: https://github.com/apache/kafka/pull/11255#discussion_r698693107



##########
File path: core/src/main/scala/kafka/controller/KafkaController.scala
##########
@@ -628,9 +628,9 @@ class KafkaController(val config: KafkaConfig,
       topicDeletionManager.failReplicaDeletion(newOfflineReplicasForDeletion)
     }
 
-    // If replica failure did not require leader re-election, inform brokers of the offline brokers
+    // If no partitions are affected, inform brokers of the offline brokers
     // Note that during leader re-election, brokers update their metadata
-    if (partitionsWithOfflineLeader.isEmpty) {
+    if (newOfflineReplicas.isEmpty || (newOfflineReplicasNotForDeletion.isEmpty && partitionsWithOfflineLeader.isEmpty)) {

Review comment:
       I think that makes sense. Do we need to do something similar for both `partitionStateMachine` calls?
   `handleStateChanges` already returns a `Map[TopicPartition, Either[Throwable, LeaderAndIsr]]`, so it seems like we could just check whether any of the values contain a `LeaderAndIsr`; that would imply that we already sent out an `updateMetadataRequest` as part of the Online/Offline state changes. Likewise, we can have `triggerOnlinePartitionStateChange` return the same results.
   
   I am thinking something along the lines of:
   ```
       // trigger OfflinePartition state for all partitions whose current leader is one amongst the newOfflineReplicas
       val offlineStateChangeResults = partitionStateMachine.handleStateChanges(partitionsWithOfflineLeader.toSeq, OfflinePartition)
       // trigger OnlinePartition state changes for offline or new partitions
       val onlineStateChangeResults = partitionStateMachine.triggerOnlinePartitionStateChange()

   ...
       // leader election succeeded for a partition iff its result is a Right(LeaderAndIsr)
       val leaderElectionSucceeded = (offlineStateChangeResults ++ onlineStateChangeResults).exists { case (_, result) => result.isRight }
       if (!leaderElectionSucceeded && newOfflineReplicasForDeletion.isEmpty) {
         sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq, Set.empty)
       }
   ```
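   
   For the `triggerOnlinePartitionStateChange` side, a rough sketch of the plumbing in `PartitionStateMachine.scala` (the helper names below are written from memory and may not match trunk exactly):
   ```
       // Surface the per-partition election results instead of discarding them.
       def triggerOnlinePartitionStateChange(): Map[TopicPartition, Either[Throwable, LeaderAndIsr]] = {
         val partitions = controllerContext.partitionsInStates(Set(OfflinePartition, NewPartition))
         triggerOnlineStateChangeForPartitions(partitions)
       }

       private def triggerOnlineStateChangeForPartitions(
         partitions: collection.Set[TopicPartition]
       ): Map[TopicPartition, Either[Throwable, LeaderAndIsr]] = {
         // Skip partitions of topics that are queued up for deletion, as today.
         val partitionsToTrigger = partitions.filterNot { partition =>
           controllerContext.isTopicQueuedUpForDeletion(partition.topic)
         }.toSeq
         // handleStateChanges already returns Map[TopicPartition, Either[Throwable, LeaderAndIsr]],
         // so we can simply propagate it to the caller.
         handleStateChanges(partitionsToTrigger, OnlinePartition, Some(OfflinePartitionLeaderElectionStrategy(false)))
       }
   ```
   That way the caller only needs to check whether any of the merged results is a `Right` before deciding to send the standalone `UpdateMetadataRequest`.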

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

