cmccabe commented on code in PR #19454:
URL: https://github.com/apache/kafka/pull/19454#discussion_r2042533671


##########
metadata/src/main/java/org/apache/kafka/controller/ClusterControlManager.java:
##########
@@ -309,8 +309,10 @@ public void activate() {
         long nowNs = time.nanoseconds();
         for (BrokerRegistration registration : brokerRegistrations.values()) {
             heartbeatManager.register(registration.id(), registration.fenced());
-            heartbeatManager.tracker().updateContactTime(
-                new BrokerIdAndEpoch(registration.id(), registration.epoch()), nowNs);
+            if (!registration.fenced()) {
+                heartbeatManager.tracker().updateContactTime(

Review Comment:
   > is my interpretation correct? in the single-node demonstration/test case I would assume it's unlikely that a broker takes a long time to shut down cleanly and catch up on metadata - so they would still run into the duplicate broker id exception
   
   In the single-node case, the controller process is restarted, so the in-memory cache is cleared.
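   
   For reference, a minimal sketch of how the loop in `activate()` likely reads with this change applied; the hunk above is truncated, so the body of the `if` block and its closing brace are an assumption reconstructed from the removed lines:
   
   ```java
   for (BrokerRegistration registration : brokerRegistrations.values()) {
       // Re-register every known broker with the heartbeat manager on controller activation.
       heartbeatManager.register(registration.id(), registration.fenced());
       if (!registration.fenced()) {
           // Only unfenced brokers get a fresh contact time; fenced brokers must
           // heartbeat again before they are considered recently contacted.
           heartbeatManager.tracker().updateContactTime(
               new BrokerIdAndEpoch(registration.id(), registration.epoch()), nowNs);
       }
   }
   ```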


