jeffkbkim commented on code in PR #12783:
URL: https://github.com/apache/kafka/pull/12783#discussion_r1007544878


##########
core/src/main/scala/kafka/server/ReplicaFetcherThread.scala:
##########
@@ -132,9 +141,17 @@ class ReplicaFetcherThread(name: String,
 
     brokerTopicStats.updateReplicationBytesIn(records.sizeInBytes)
 
+    logAppendInfo.foreach { _ => partitionsWithNewRecords += topicPartition }

Review Comment:
   I believe that is the case. From [UnifiedLog.updateHighWatermarkWithLogEndOffset()](https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/UnifiedLog.scala#L597-L603):
   ```
     private def updateHighWatermarkWithLogEndOffset(): Unit = {
       // Update the high watermark in case it has gotten ahead of the log end offset following a truncation
       // or if a new segment has been rolled and the offset metadata needs to be updated.
       if (highWatermark >= localLog.logEndOffset) {
         updateHighWatermarkMetadata(localLog.logEndOffsetMetadata)
       }
     }
   ```
   This is called on every `UnifiedLog.append()` and `UnifiedLog.roll()`.
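   As a minimal, self-contained sketch of that call pattern (not the actual `UnifiedLog` code; the class and member names below are simplified stand-ins), both the append path and the roll path finish by refreshing the high-watermark metadata against the current log end offset:
   ```
   // Simplified sketch of the call pattern only; the real logic lives in kafka.log.UnifiedLog.
   class SketchLog {
     private var logEndOffset: Long = 0L
     private var highWatermark: Long = 0L

     def append(recordCount: Int): Unit = {
       logEndOffset += recordCount
       updateHighWatermarkWithLogEndOffset() // invoked on every append
     }

     def roll(): Unit =
       updateHighWatermarkWithLogEndOffset() // invoked on every roll, since the offset metadata changes

     private def updateHighWatermarkWithLogEndOffset(): Unit = {
       // mirror of the snippet above: cap the high watermark at the log end offset
       if (highWatermark >= logEndOffset) highWatermark = logEndOffset
     }
   }
   ```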
   
   However, I did notice that `LeaderHwChange` is not set anywhere except in the produce code path. From `Partition.appendRecordsToLeader()`:
   ```
   info.copy(leaderHwChange = if (leaderHWIncremented) LeaderHwChange.Increased else LeaderHwChange.Same)
   ```
   No other `LogAppendInfo` initialization passes in `LeaderHwChange`.
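   To illustrate why that matters, here is a simplified sketch (not the real `kafka.log` definitions): with a defaulted `leaderHwChange` field, any code path that does not explicitly copy in `Increased`/`Same` the way `appendRecordsToLeader()` does simply leaves the default in place, so the replication path never carries high-watermark information:
   ```
   // Simplified sketch only; names mirror but do not reproduce the real kafka.log types.
   sealed trait LeaderHwChange
   object LeaderHwChange {
     case object Increased extends LeaderHwChange
     case object Same extends LeaderHwChange
     case object None extends LeaderHwChange
   }

   // Stand-in for LogAppendInfo: leaderHwChange defaults to "no information".
   case class LogAppendInfoSketch(
     lastOffset: Long,
     leaderHwChange: LeaderHwChange = LeaderHwChange.None
   )

   object Demo extends App {
     // A follower-style append that never sets leaderHwChange keeps the default.
     val followerInfo = LogAppendInfoSketch(lastOffset = 42L)
     println(followerInfo.leaderHwChange) // prints "None"
   }
   ```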
   
   So the follower would need to keep track, per partition, of whether the high watermark changed, which I don't think is something we want to do.
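   Just to spell out the kind of bookkeeping that would mean (a hypothetical sketch; nothing like this exists in `ReplicaFetcherThread` today, and the names are made up):
   ```
   import scala.collection.mutable

   // Hypothetical per-partition high-watermark tracking on the follower side.
   // The extra mutable state per fetched partition is what we'd rather avoid.
   final case class TopicPartitionKey(topic: String, partition: Int)

   class FollowerHwTracker {
     private val lastSeenHighWatermark = mutable.Map.empty[TopicPartitionKey, Long]

     // Returns true if the high watermark advanced since the previous fetch for this partition.
     def recordAndCheck(tp: TopicPartitionKey, currentHw: Long): Boolean = {
       val previous = lastSeenHighWatermark.getOrElse(tp, -1L)
       lastSeenHighWatermark.put(tp, currentHw)
       currentHw > previous
     }
   }
   ```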


