dengziming commented on code in PR #12265: URL: https://github.com/apache/kafka/pull/12265#discussion_r914533237
##########
core/src/main/scala/kafka/server/metadata/BrokerMetadataSnapshotter.scala:
##########

@@ -45,26 +45,37 @@ class BrokerMetadataSnapshotter(
    */
   private var _currentSnapshotOffset = -1L

+  /**
+   * The offset of the newest snapshot, or -1 if there hasn't been one. Accessed only under
+   * the object lock.
+   */
+  private var _latestSnapshotOffset = -1L
+
   /**
    * The event queue which runs this listener.
    */
   val eventQueue = new KafkaEventQueue(time, logContext, threadNamePrefix.getOrElse(""))

   override def maybeStartSnapshot(lastContainedLogTime: Long, image: MetadataImage): Boolean = synchronized {
-    if (_currentSnapshotOffset == -1L) {
+    if (_currentSnapshotOffset != -1) {
+      warn(s"Declining to create a new snapshot at ${image.highestOffsetAndEpoch()} because " +
+        s"there is already a snapshot in progress at offset ${_currentSnapshotOffset}")
+      false
+    } else if (_latestSnapshotOffset >= image.highestOffsetAndEpoch().offset) {

Review Comment:
   Yes, the test failed when it generated a snapshot twice at the same offset: first because enough bytes had accumulated, and then because the metadata version changed. I changed the comparison to "==" to make the check more accurate and added a unit test for it.
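For illustration, here is a minimal, self-contained sketch of the two checks being discussed: decline a snapshot while one is already in progress, and decline a second snapshot at the same offset (the "==" comparison). The names below (`SnapshotGuard`, `handleSnapshotDone`, the plain `Long` offset) are hypothetical simplifications and do not mirror the real `BrokerMetadataSnapshotter` API.

```scala
// Hypothetical, simplified model of the snapshot guard discussed in this thread.
// It is not the real BrokerMetadataSnapshotter; it only illustrates the two checks:
// (1) decline while a snapshot is in progress, (2) decline a duplicate at the same offset.
class SnapshotGuard {
  // Offset of the snapshot currently being written, or -1 if none is in progress.
  private var currentSnapshotOffset = -1L
  // Offset of the newest completed snapshot, or -1 if there has never been one.
  private var latestSnapshotOffset = -1L

  // Returns true if a snapshot at `offset` should be started.
  def maybeStartSnapshot(offset: Long): Boolean = synchronized {
    if (currentSnapshotOffset != -1L) {
      false // a snapshot is already in progress
    } else if (latestSnapshotOffset == offset) {
      false // a snapshot at exactly this offset already exists
    } else {
      currentSnapshotOffset = offset
      true
    }
  }

  // Called when the in-progress snapshot has been written successfully.
  def handleSnapshotDone(): Unit = synchronized {
    latestSnapshotOffset = currentSnapshotOffset
    currentSnapshotOffset = -1L
  }
}

object SnapshotGuardExample extends App {
  val guard = new SnapshotGuard
  assert(guard.maybeStartSnapshot(100L))   // first request at offset 100 starts a snapshot
  guard.handleSnapshotDone()
  assert(!guard.maybeStartSnapshot(100L))  // a second request at the same offset is declined
  assert(guard.maybeStartSnapshot(200L))   // a later offset is accepted again
}
```

With "==", only an exact repeat of the latest snapshot offset is declined, which matches the failure scenario described above: a snapshot triggered twice at the same offset, once by accumulated bytes and once by a metadata version change.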