[ https://issues.apache.org/jira/browse/KAFKA-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17939055#comment-17939055 ]

Gunnar Morling edited comment on KAFKA-19047 at 3/27/25 7:42 PM:
-----------------------------------------------------------------

For reference, here are the logs I've observed in that situation:

{code}
bin/kafka-server-start.sh config/server.properties
[2025-03-27 20:40:54,651] INFO Registered kafka:type=kafka.Log4jController 
MBean (kafka.utils.Log4jControllerRegistration$)
[2025-03-27 20:40:54,803] INFO Registered signal handlers for TERM, INT, HUP 
(org.apache.kafka.common.utils.LoggingSignalHandler)
[2025-03-27 20:40:54,804] INFO [ControllerServer id=1] Starting controller 
(kafka.server.ControllerServer)
[2025-03-27 20:40:54,932] INFO Updated connection-accept-rate max connection 
creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2025-03-27 20:40:54,947] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] 
Created data-plane acceptor and processors for endpoint : 
ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2025-03-27 20:40:54,950] INFO authorizerStart completed for endpoint 
CONTROLLER. Endpoint is now READY. 
(org.apache.kafka.server.network.EndpointReadyFutures)
[2025-03-27 20:40:54,951] INFO [SharedServer id=1] Starting SharedServer 
(kafka.server.SharedServer)
[2025-03-27 20:40:54,971] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Recovering unflushed segment 0. 0 recovered for 
__cluster_metadata-0. (org.apache.kafka.storage.internals.log.LogLoader)
[2025-03-27 20:40:54,977] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:54,978] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding 
producer state from offset 0 (org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:54,978] INFO Deleted producer state snapshot 
/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005680.snapshot 
(org.apache.kafka.storage.internals.log.SnapshotFile)
[2025-03-27 20:40:54,978] INFO Deleted producer state snapshot 
/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005844.snapshot 
(org.apache.kafka.storage.internals.log.SnapshotFile)
[2025-03-27 20:40:54,979] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Producer state recovery took 1ms for snapshot 
load and 0ms for segment recovery from offset 0 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,014] INFO [ProducerStateManager 
partition=__cluster_metadata-0] Wrote producer snapshot at offset 5844 with 0 
producer ids in 9 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager)
[2025-03-27 20:40:55,017] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Loading producer state till offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,017] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding 
producer state from offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,017] INFO [ProducerStateManager 
partition=__cluster_metadata-0] Loading producer state from snapshot file 
'SnapshotFile(offset=5844, 
file=/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005844.snapshot)'
 (org.apache.kafka.storage.internals.log.ProducerStateManager)
[2025-03-27 20:40:55,018] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Producer state recovery took 1ms for snapshot 
load and 0ms for segment recovery from offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,034] INFO Initialized snapshots with IDs 
SortedSet(OffsetAndEpoch(offset=0, epoch=0)) from 
/tmp/kraft-combined-logs/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2025-03-27 20:40:55,039] INFO [raft-expiration-reaper]: Starting 
(kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2025-03-27 20:40:55,043] INFO [RaftManager id=1] Starting request manager with 
bootstrap servers: [localhost:9093 (id: -2 rack: null isFenced: false)] 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,045] INFO [RaftManager id=1] Reading KRaft snapshot and 
log as part of the initialization (org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,046] INFO [RaftManager id=1] Loading snapshot 
(OffsetAndEpoch(offset=0, epoch=0)) since log start offset (0) is greater than 
the internal listener's next offset (-1) 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,048] INFO [RaftManager id=1] Latest kraft.version is 
KRAFT_VERSION_1 at offset -1 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,049] INFO [RaftManager id=1] Latest set of voters is 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
at offset -1 (org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,051] INFO [RaftManager id=1] Latest kraft.version is 
KRAFT_VERSION_1 at offset 1 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,051] INFO [RaftManager id=1] Latest set of voters is 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
at offset 2 (org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,062] INFO [RaftManager id=1] Starting voters are 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,067] INFO [RaftManager id=1] Attempting durable transition 
to ResignedState(localId=1, epoch=3, voters=[1], electionTimeoutMs=1002, 
unackedVoters=[], preferredSuccessors=[]) from null 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,077] INFO [RaftManager id=1] Completed transition to 
ResignedState(localId=1, epoch=3, voters=[1], electionTimeoutMs=1002, 
unackedVoters=[], preferredSuccessors=[]) from null 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,080] INFO [RaftManager id=1] Completed transition to 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) from ResignedState(localId=1, epoch=3, 
voters=[1], electionTimeoutMs=1002, unackedVoters=[], preferredSuccessors=[]) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,081] INFO [RaftManager id=1] Attempting durable transition 
to CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, epoch=4, 
retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) from 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,086] INFO [RaftManager id=1] Completed transition to 
CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, epoch=4, 
retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) from 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,087] INFO [RaftManager id=1] Attempting durable transition 
to Leader(localReplicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
epoch=4, epochStartOffset=5844, highWatermark=Optional.empty, 
voterStates={1=ReplicaState(replicaKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), endOffset=Optional.empty, 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
from CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, 
epoch=4, retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,092] INFO [RaftManager id=1] Completed transition to 
Leader(localReplicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
epoch=4, epochStartOffset=5844, highWatermark=Optional.empty, 
voterStates={1=ReplicaState(replicaKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), endOffset=Optional.empty, 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
from CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, 
epoch=4, retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,099] INFO [kafka-1-raft-io-thread]: Starting 
(org.apache.kafka.raft.KafkaRaftClientDriver)
[2025-03-27 20:40:55,099] INFO [kafka-1-raft-outbound-request-thread]: Starting 
(org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2025-03-27 20:40:55,104] INFO [MetadataLoader id=1] initializeNewPublishers: 
the loader is still catching up because we still don't know the high water mark 
yet. (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,105] INFO [ControllerServer id=1] Waiting for controller 
quorum voters future (kafka.server.ControllerServer)
[2025-03-27 20:40:55,105] INFO [ControllerServer id=1] Finished waiting for 
controller quorum voters future (kafka.server.ControllerServer)
[2025-03-27 20:40:55,107] INFO [RaftManager id=1] High watermark set to 
LogOffsetMetadata(offset=5845, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=421756)]) for 
the first time for epoch 4 based on indexOfHw 0 and voters 
[ReplicaState(replicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
endOffset=Optional[LogOffsetMetadata(offset=5845, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=421756)])], 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] 
(org.apache.kafka.raft.LeaderState)
[2025-03-27 20:40:55,118] INFO [RaftManager id=1] Registered the listener 
org.apache.kafka.image.loader.MetadataLoader@1660053996 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,118] INFO [MetadataLoader id=1] 
handleLoadSnapshot(00000000000000000000-0000000000): incrementing 
HandleLoadSnapshotCount to 1. (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
handleLoadSnapshot(00000000000000000000-0000000000): generated a metadata delta 
between offset -1 and this snapshot in 1409 us. 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
maybePublishMetadata(SNAPSHOT): The loader is still catching up because we have 
loaded up to offset -1, but the high water mark is 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
maybePublishMetadata(LOG_DELTA): The loader is still catching up because we 
have loaded up to offset 2, but the high water mark is 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,124] INFO [RaftManager id=1] Registered the listener 
org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1391447477 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,127] INFO [controller-1-ThrottledChannelReaper-Fetch]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,127] INFO [controller-1-ThrottledChannelReaper-Produce]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,128] INFO [controller-1-ThrottledChannelReaper-Request]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,128] INFO 
[controller-1-ThrottledChannelReaper-ControllerMutation]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,135] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,143] INFO [ControllerServer id=1] Waiting for the 
controller metadata publishers to be installed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,145] INFO [MetadataLoader id=1] 
maybePublishMetadata(LOG_DELTA): The loader finished catching up to the current 
high water mark of 5845 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing SnapshotGenerator with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [ControllerServer id=1] Finished waiting for the 
controller metadata publishers to be installed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing KRaftMetadataCachePublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing FeaturesPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] 
Enabling request processing. (kafka.network.SocketServer)
[2025-03-27 20:40:55,147] INFO [ControllerServer id=1] Loaded new metadata 
Features(metadataVersion=4.0-IV3, finalizedFeatures={group.version=1, 
transaction.version=2, metadata.version=25}, finalizedFeaturesEpoch=5844). 
(org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerRegistrationsPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerRegistrationManager with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicConfigPublisher controller id=1 with a snapshot at offset 
5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] Found registration for 
6X6DBx4SR5mUuzRfYNKOjQ instead of our incarnation. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicClientQuotaPublisher controller id=1 with a snapshot at 
offset 5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicTopicClusterQuotaPublisher controller id=1 with a snapshot 
at offset 5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ScramPublisher controller id=1 with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO Awaiting socket connections on 0.0.0.0:9093. 
(kafka.network.DataPlaneAcceptor)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DelegationTokenPublisher controller id=1 with a snapshot at offset 
5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerMetadataMetricsPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing AclPublisher controller id=1 with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,153] INFO 
[controller-1-to-controller-registration-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,153] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] initialized channel manager. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Waiting for all of the 
authorizer futures to be completed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Finished waiting for all 
of the authorizer futures to be completed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO 
[controller-1-to-controller-registration-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Waiting for all of the 
SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Finished waiting for all 
of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [BrokerServer id=1] Transition from SHUTDOWN to 
STARTING (kafka.server.BrokerServer)
[2025-03-27 20:40:55,153] INFO [BrokerServer id=1] Starting broker 
(kafka.server.BrokerServer)
[2025-03-27 20:40:55,154] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] sendControllerRegistration: attempting to 
send ControllerRegistrationRequestData(controllerId=1, 
incarnationId=JyRNnk_XQx-SK2c6gpcqUw, zkMigrationReady=false, 
listeners=[Listener(name='CONTROLLER', host='localhost', port=9093, 
securityProtocol=0)], features=[Feature(name='group.version', 
minSupportedVersion=0, maxSupportedVersion=1), 
Feature(name='transaction.version', minSupportedVersion=0, 
maxSupportedVersion=2), Feature(name='eligible.leader.replicas.version', 
minSupportedVersion=0, maxSupportedVersion=1), Feature(name='kraft.version', 
minSupportedVersion=0, maxSupportedVersion=1), Feature(name='metadata.version', 
minSupportedVersion=7, maxSupportedVersion=25)]) 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Fetch]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Produce]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Request]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO 
[broker-1-ThrottledChannelReaper-ControllerMutation]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,165] INFO [BrokerServer id=1] Waiting for controller 
quorum voters future (kafka.server.BrokerServer)
[2025-03-27 20:40:55,165] INFO [BrokerServer id=1] Finished waiting for 
controller quorum voters future (kafka.server.BrokerServer)
[2025-03-27 20:40:55,166] INFO 
[broker-1-to-controller-forwarding-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,166] INFO 
[broker-1-to-controller-forwarding-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,169] INFO [client-metrics-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,176] INFO Updated connection-accept-rate max connection 
creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2025-03-27 20:40:55,177] INFO [SocketServer listenerType=BROKER, nodeId=1] 
Created data-plane acceptor and processors for endpoint : 
ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2025-03-27 20:40:55,179] INFO 
[broker-1-to-controller-alter-partition-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,179] INFO 
[broker-1-to-controller-alter-partition-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,182] INFO 
[broker-1-to-controller-directory-assignments-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,183] INFO 
[broker-1-to-controller-directory-assignments-channel-manager]: Recorded new 
KRaft controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,189] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,189] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,197] INFO [BrokerServer id=1] Using no op persister 
(kafka.server.BrokerServer)
[2025-03-27 20:40:55,197] INFO [group-coordinator-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,207] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] Our registration has been persisted to the 
metadata log. (kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,208] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] RegistrationResponseHandler: controller 
acknowledged ControllerRegistrationRequest. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-0]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-1]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-2]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-3]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,234] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,234] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,235] INFO [BrokerLifecycleManager id=1] Incarnation 
dXYWi31WSr-QyZzpsF7Aww of broker 1 in cluster lnPmGrhbRoq03yW06IptOg is now 
STARTING. (kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,239] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,241] INFO [share-group-lock-timeout-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,247] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Waiting for the broker 
metadata publishers to be installed (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Finished waiting for the 
broker metadata publishers to be installed (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Waiting for the controller 
to acknowledge that we are caught up (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing MetadataVersionPublisher(id=1) with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,255] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing BrokerMetadataPublisher with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,255] INFO [BrokerMetadataPublisher id=1] Publishing 
initial metadata at offset OffsetAndEpoch(offset=5845, epoch=4) with 
metadata.version Optional[4.0-IV3]. 
(kafka.server.metadata.BrokerMetadataPublisher)
[2025-03-27 20:40:55,256] INFO Loading logs from log dirs 
ArrayBuffer(/tmp/kraft-combined-logs) (kafka.log.LogManager)
[2025-03-27 20:40:55,257] INFO No logs found to be loaded in 
/tmp/kraft-combined-logs (kafka.log.LogManager)
[2025-03-27 20:40:55,260] INFO Loaded 0 logs in 4ms (kafka.log.LogManager)
[2025-03-27 20:40:55,260] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager)
[2025-03-27 20:40:55,261] INFO Starting log flusher with a default period of 
9223372036854775807 ms. (kafka.log.LogManager)
[2025-03-27 20:40:55,276] INFO [kafka-log-cleaner-thread-0]: Starting 
(kafka.log.LogCleaner$CleanerThread)
[2025-03-27 20:40:55,277] INFO [LogDirFailureHandler]: Starting 
(kafka.server.ReplicaManager$LogDirFailureHandler)
[2025-03-27 20:40:55,277] INFO [AddPartitionsToTxnSenderThread-1]: Starting 
(kafka.server.AddPartitionsToTxnManager)
[2025-03-27 20:40:55,278] INFO [GroupCoordinator id=1] Starting up. 
(org.apache.kafka.coordinator.group.GroupCoordinatorService)
[2025-03-27 20:40:55,278] INFO [GroupCoordinator id=1] Startup complete. 
(org.apache.kafka.coordinator.group.GroupCoordinatorService)
[2025-03-27 20:40:55,279] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator)
[2025-03-27 20:40:55,279] INFO [TxnMarkerSenderThread-1]: Starting 
(kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2025-03-27 20:40:55,280] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator)
[2025-03-27 20:40:55,283] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing BrokerRegistrationTracker(id=1) with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,347] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,552] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,959] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:56,777] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:58,391] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:01,604] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,077] INFO [BrokerLifecycleManager id=1] Successfully 
registered broker 1 with broker epoch 5868 (kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,087] INFO [BrokerLifecycleManager id=1] The broker has 
caught up. Transitioning from STARTING to RECOVERY. 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Finished waiting for the 
controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Waiting for the initial 
broker metadata update to be published (kafka.server.BrokerServer)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Finished waiting for the 
initial broker metadata update to be published (kafka.server.BrokerServer)
[2025-03-27 20:41:06,089] INFO KafkaConfig values:
    add.partitions.to.txn.retry.backoff.max.ms = 100
    add.partitions.to.txn.retry.backoff.ms = 20
    advertised.listeners = 
PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.heartbeat.interval.ms = 2000
    broker.id = 1
    broker.rack = null
    broker.session.timeout.ms = 9000
    client.quota.callback.class = null
    compression.gzip.level = -1
    compression.lz4.level = 9
    compression.type = producer
    compression.zstd.level = 3
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    controlled.shutdown.enable = true
    controller.listener.names = CONTROLLER
    controller.performance.always.log.threshold.ms = 2000
    controller.performance.sample.period.ms = 60000
    controller.quorum.append.linger.ms = 25
    controller.quorum.bootstrap.servers = [localhost:9093]
    controller.quorum.election.backoff.max.ms = 1000
    controller.quorum.election.timeout.ms = 1000
    controller.quorum.fetch.timeout.ms = 2000
    controller.quorum.request.timeout.ms = 2000
    controller.quorum.retry.backoff.ms = 20
    controller.quorum.voters = []
    controller.quota.window.num = 11
    controller.quota.window.size.seconds = 1
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.max.lifetime.ms = 604800000
    delegation.token.secret.key = null
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    early.start.listeners = null
    fetch.max.bytes = 57671680
    fetch.purgatory.purge.interval.requests = 1000
    group.consumer.assignors = [uniform, range]
    group.consumer.heartbeat.interval.ms = 5000
    group.consumer.max.heartbeat.interval.ms = 15000
    group.consumer.max.session.timeout.ms = 60000
    group.consumer.max.size = 2147483647
    group.consumer.migration.policy = bidirectional
    group.consumer.min.heartbeat.interval.ms = 5000
    group.consumer.min.session.timeout.ms = 45000
    group.consumer.session.timeout.ms = 45000
    group.coordinator.append.linger.ms = 5
    group.coordinator.new.enable = true
    group.coordinator.rebalance.protocols = [classic, consumer]
    group.coordinator.threads = 4
    group.initial.rebalance.delay.ms = 3000
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    group.share.delivery.count.limit = 5
    group.share.enable = false
    group.share.heartbeat.interval.ms = 5000
    group.share.max.groups = 10
    group.share.max.heartbeat.interval.ms = 15000
    group.share.max.record.lock.duration.ms = 60000
    group.share.max.session.timeout.ms = 60000
    group.share.max.size = 200
    group.share.min.heartbeat.interval.ms = 5000
    group.share.min.record.lock.duration.ms = 15000
    group.share.min.session.timeout.ms = 45000
    group.share.partition.max.record.locks = 200
    group.share.persister.class.name = 
org.apache.kafka.server.share.persister.DefaultStatePersister
    group.share.record.lock.duration.ms = 30000
    group.share.session.timeout.ms = 45000
    initial.broker.registration.timeout.ms = 60000
    inter.broker.listener.name = PLAINTEXT
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    listener.security.protocol.map = 
CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    listeners = PLAINTEXT://:9092,CONTROLLER://:9093
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dir.failure.timeout.ms = 30000
    log.dirs = /tmp/kraft-combined-logs
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.initial.task.delay.ms = 30000
    log.local.retention.bytes = -2
    log.local.retention.ms = -2
    log.message.timestamp.after.max.ms = 3600000
    log.message.timestamp.before.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connection.creation.rate = 2147483647
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    max.incremental.fetch.session.cache.slots = 1000
    max.request.partition.size.limit = 2000
    message.max.bytes = 1048588
    metadata.log.dir = null
    metadata.log.max.record.bytes.between.snapshots = 20971520
    metadata.log.max.snapshot.interval.ms = 3600000
    metadata.log.segment.bytes = 1073741824
    metadata.log.segment.min.bytes = 8388608
    metadata.log.segment.ms = 604800000
    metadata.max.idle.interval.ms = 500
    metadata.max.retention.bytes = 104857600
    metadata.max.retention.ms = 604800000
    metric.reporters = [org.apache.kafka.common.metrics.JmxReporter]
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    node.id = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    principal.builder.class = class 
org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
    process.roles = [broker, controller]
    producer.id.expiration.check.interval.ms = 600000
    producer.id.expiration.ms = 86400000
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.window.num = 11
    quota.window.size.seconds = 1
    remote.fetch.max.wait.ms = 500
    remote.list.offsets.request.timeout.ms = 30000
    remote.log.index.file.cache.total.size.bytes = 1073741824
    remote.log.manager.copier.thread.pool.size = 10
    remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
    remote.log.manager.copy.quota.window.num = 11
    remote.log.manager.copy.quota.window.size.seconds = 1
    remote.log.manager.expiration.thread.pool.size = 10
    remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
    remote.log.manager.fetch.quota.window.num = 11
    remote.log.manager.fetch.quota.window.size.seconds = 1
    remote.log.manager.task.interval.ms = 30000
    remote.log.manager.task.retry.backoff.max.ms = 30000
    remote.log.manager.task.retry.backoff.ms = 500
    remote.log.manager.task.retry.jitter = 0.2
    remote.log.manager.thread.pool.size = 2
    remote.log.metadata.custom.metadata.max.bytes = 128
    remote.log.metadata.manager.class.name = 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
    remote.log.metadata.manager.class.path = null
    remote.log.metadata.manager.impl.prefix = rlmm.config.
    remote.log.metadata.manager.listener.name = null
    remote.log.reader.max.pending.tasks = 100
    remote.log.reader.threads = 10
    remote.log.storage.manager.class.name = null
    remote.log.storage.manager.class.path = null
    remote.log.storage.manager.impl.prefix = rsm.config.
    remote.log.storage.system.enable = false
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 30000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism.controller.protocol = GSSAPI
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    sasl.server.callback.handler.class = null
    sasl.server.max.receive.size = 524288
    security.inter.broker.protocol = PLAINTEXT
    security.providers = null
    server.max.startup.time.ms = 9223372036854775807
    share.coordinator.append.linger.ms = 10
    share.coordinator.load.buffer.size = 5242880
    share.coordinator.snapshot.update.records.per.snapshot = 500
    share.coordinator.state.topic.compression.codec = 0
    share.coordinator.state.topic.min.isr = 1
    share.coordinator.state.topic.num.partitions = 50
    share.coordinator.state.topic.prune.interval.ms = 300000
    share.coordinator.state.topic.replication.factor = 1
    share.coordinator.state.topic.segment.bytes = 104857600
    share.coordinator.threads = 1
    share.coordinator.write.timeout.ms = 5000
    share.fetch.max.fetch.records = 2147483647
    share.fetch.purgatory.purge.interval.requests = 1000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    socket.listen.backlog.size = 50
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.allow.dn.changes = false
    ssl.allow.san.changes = false
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    telemetry.max.bytes = 1048576
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
    transaction.max.timeout.ms = 900000
    transaction.partition.verification.enable = true
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 1
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    unclean.leader.election.interval.ms = 300000
    unstable.api.versions.enable = false
    unstable.feature.versions.enable = false
 (org.apache.kafka.common.config.AbstractConfig)
[2025-03-27 20:41:06,094] INFO [BrokerServer id=1] Waiting for the broker to be 
unfenced (kafka.server.BrokerServer)
[2025-03-27 20:41:06,127] INFO [BrokerLifecycleManager id=1] The broker has 
been unfenced. Transitioning from RECOVERY to RUNNING. 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,131] INFO [BrokerServer id=1] Finished waiting for the 
broker to be unfenced (kafka.server.BrokerServer)
[2025-03-27 20:41:06,131] INFO authorizerStart completed for endpoint 
PLAINTEXT. Endpoint is now READY. 
(org.apache.kafka.server.network.EndpointReadyFutures)
[2025-03-27 20:41:06,131] INFO [SocketServer listenerType=BROKER, nodeId=1] 
Enabling request processing. (kafka.network.SocketServer)
[2025-03-27 20:41:06,132] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Waiting for all of the 
authorizer futures to be completed (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Finished waiting for all of 
the authorizer futures to be completed (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Waiting for all of the 
SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Finished waiting for all of 
the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Transition from STARTING to 
STARTED (kafka.server.BrokerServer)
[2025-03-27 20:41:06,133] INFO Kafka version: 4.0.0 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,133] INFO Kafka commitId: 985bc99521dd22bb 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,133] INFO Kafka startTimeMs: 1743104466132 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,134] INFO [KafkaRaftServer nodeId=1] Kafka Server started 
(kafka.server.KafkaRaftServer)
{code}


was (Author: gunnar.morling):
For reference, here are the logs I've observed in that situation:

```
bin/kafka-server-start.sh config/server.properties
[2025-03-27 20:40:54,651] INFO Registered kafka:type=kafka.Log4jController 
MBean (kafka.utils.Log4jControllerRegistration$)
[2025-03-27 20:40:54,803] INFO Registered signal handlers for TERM, INT, HUP 
(org.apache.kafka.common.utils.LoggingSignalHandler)
[2025-03-27 20:40:54,804] INFO [ControllerServer id=1] Starting controller 
(kafka.server.ControllerServer)
[2025-03-27 20:40:54,932] INFO Updated connection-accept-rate max connection 
creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2025-03-27 20:40:54,947] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] 
Created data-plane acceptor and processors for endpoint : 
ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2025-03-27 20:40:54,950] INFO authorizerStart completed for endpoint 
CONTROLLER. Endpoint is now READY. 
(org.apache.kafka.server.network.EndpointReadyFutures)
[2025-03-27 20:40:54,951] INFO [SharedServer id=1] Starting SharedServer 
(kafka.server.SharedServer)
[2025-03-27 20:40:54,971] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Recovering unflushed segment 0. 0 recovered for 
__cluster_metadata-0. (org.apache.kafka.storage.internals.log.LogLoader)
[2025-03-27 20:40:54,977] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:54,978] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding 
producer state from offset 0 (org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:54,978] INFO Deleted producer state snapshot 
/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005680.snapshot 
(org.apache.kafka.storage.internals.log.SnapshotFile)
[2025-03-27 20:40:54,978] INFO Deleted producer state snapshot 
/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005844.snapshot 
(org.apache.kafka.storage.internals.log.SnapshotFile)
[2025-03-27 20:40:54,979] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Producer state recovery took 1ms for snapshot 
load and 0ms for segment recovery from offset 0 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,014] INFO [ProducerStateManager 
partition=__cluster_metadata-0] Wrote producer snapshot at offset 5844 with 0 
producer ids in 9 ms. 
(org.apache.kafka.storage.internals.log.ProducerStateManager)
[2025-03-27 20:40:55,017] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Loading producer state till offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,017] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding 
producer state from offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,017] INFO [ProducerStateManager 
partition=__cluster_metadata-0] Loading producer state from snapshot file 
'SnapshotFile(offset=5844, 
file=/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000005844.snapshot)'
 (org.apache.kafka.storage.internals.log.ProducerStateManager)
[2025-03-27 20:40:55,018] INFO [LogLoader partition=__cluster_metadata-0, 
dir=/tmp/kraft-combined-logs] Producer state recovery took 1ms for snapshot 
load and 0ms for segment recovery from offset 5844 
(org.apache.kafka.storage.internals.log.UnifiedLog)
[2025-03-27 20:40:55,034] INFO Initialized snapshots with IDs 
SortedSet(OffsetAndEpoch(offset=0, epoch=0)) from 
/tmp/kraft-combined-logs/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2025-03-27 20:40:55,039] INFO [raft-expiration-reaper]: Starting 
(kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2025-03-27 20:40:55,043] INFO [RaftManager id=1] Starting request manager with 
bootstrap servers: [localhost:9093 (id: -2 rack: null isFenced: false)] 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,045] INFO [RaftManager id=1] Reading KRaft snapshot and 
log as part of the initialization (org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,046] INFO [RaftManager id=1] Loading snapshot 
(OffsetAndEpoch(offset=0, epoch=0)) since log start offset (0) is greater than 
the internal listener's next offset (-1) 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,048] INFO [RaftManager id=1] Latest kraft.version is 
KRAFT_VERSION_1 at offset -1 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,049] INFO [RaftManager id=1] Latest set of voters is 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
at offset -1 (org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,051] INFO [RaftManager id=1] Latest kraft.version is 
KRAFT_VERSION_1 at offset 1 
(org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,051] INFO [RaftManager id=1] Latest set of voters is 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
at offset 2 (org.apache.kafka.raft.internals.KRaftControlRecordStateMachine)
[2025-03-27 20:40:55,062] INFO [RaftManager id=1] Starting voters are 
VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
listeners=Endpoints(endpoints={ListenerName(CONTROLLER)=localhost/<unresolved>:9093}),
 supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:1])}) 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,067] INFO [RaftManager id=1] Attempting durable transition 
to ResignedState(localId=1, epoch=3, voters=[1], electionTimeoutMs=1002, 
unackedVoters=[], preferredSuccessors=[]) from null 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,077] INFO [RaftManager id=1] Completed transition to 
ResignedState(localId=1, epoch=3, voters=[1], electionTimeoutMs=1002, 
unackedVoters=[], preferredSuccessors=[]) from null 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,080] INFO [RaftManager id=1] Completed transition to 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) from ResignedState(localId=1, epoch=3, 
voters=[1], electionTimeoutMs=1002, unackedVoters=[], preferredSuccessors=[]) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,081] INFO [RaftManager id=1] Attempting durable transition 
to CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, epoch=4, 
retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) from 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,086] INFO [RaftManager id=1] Completed transition to 
CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, epoch=4, 
retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) from 
ProspectiveState(epoch=3, leaderId=OptionalInt[1], retries=1, 
votedKey=Optional.empty, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), electionTimeoutMs=1397, 
highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,087] INFO [RaftManager id=1] Attempting durable transition 
to Leader(localReplicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
epoch=4, epochStartOffset=5844, highWatermark=Optional.empty, 
voterStates={1=ReplicaState(replicaKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), endOffset=Optional.empty, 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
from CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, 
epoch=4, retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,092] INFO [RaftManager id=1] Completed transition to 
Leader(localReplicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
epoch=4, epochStartOffset=5844, highWatermark=Optional.empty, 
voterStates={1=ReplicaState(replicaKey=ReplicaKey(id=1, 
directoryId=mRnAMtoIReCGVWGS_Q2tnA), endOffset=Optional.empty, 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
from CandidateState(localId=1, localDirectoryId=mRnAMtoIReCGVWGS_Q2tnA, 
epoch=4, retries=1, 
epochElection=EpochElection(voterStates={1=VoterState(replicaKey=ReplicaKey(id=1,
 directoryId=mRnAMtoIReCGVWGS_Q2tnA), state=GRANTED)}), 
highWatermark=Optional.empty, electionTimeoutMs=1199) 
(org.apache.kafka.raft.QuorumState)
[2025-03-27 20:40:55,099] INFO [kafka-1-raft-io-thread]: Starting 
(org.apache.kafka.raft.KafkaRaftClientDriver)
[2025-03-27 20:40:55,099] INFO [kafka-1-raft-outbound-request-thread]: Starting 
(org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2025-03-27 20:40:55,104] INFO [MetadataLoader id=1] initializeNewPublishers: 
the loader is still catching up because we still don't know the high water mark 
yet. (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,105] INFO [ControllerServer id=1] Waiting for controller 
quorum voters future (kafka.server.ControllerServer)
[2025-03-27 20:40:55,105] INFO [ControllerServer id=1] Finished waiting for 
controller quorum voters future (kafka.server.ControllerServer)
[2025-03-27 20:40:55,107] INFO [RaftManager id=1] High watermark set to 
LogOffsetMetadata(offset=5845, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=421756)]) for 
the first time for epoch 4 based on indexOfHw 0 and voters 
[ReplicaState(replicaKey=ReplicaKey(id=1, directoryId=mRnAMtoIReCGVWGS_Q2tnA), 
endOffset=Optional[LogOffsetMetadata(offset=5845, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=421756)])], 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] 
(org.apache.kafka.raft.LeaderState)
[2025-03-27 20:40:55,118] INFO [RaftManager id=1] Registered the listener 
org.apache.kafka.image.loader.MetadataLoader@1660053996 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,118] INFO [MetadataLoader id=1] 
handleLoadSnapshot(00000000000000000000-0000000000): incrementing 
HandleLoadSnapshotCount to 1. (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
handleLoadSnapshot(00000000000000000000-0000000000): generated a metadata delta 
between offset -1 and this snapshot in 1409 us. 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
maybePublishMetadata(SNAPSHOT): The loader is still catching up because we have 
loaded up to offset -1, but the high water mark is 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,120] INFO [MetadataLoader id=1] 
maybePublishMetadata(LOG_DELTA): The loader is still catching up because we 
have loaded up to offset 2, but the high water mark is 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,124] INFO [RaftManager id=1] Registered the listener 
org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1391447477 
(org.apache.kafka.raft.KafkaRaftClient)
[2025-03-27 20:40:55,127] INFO [controller-1-ThrottledChannelReaper-Fetch]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,127] INFO [controller-1-ThrottledChannelReaper-Produce]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,128] INFO [controller-1-ThrottledChannelReaper-Request]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,128] INFO 
[controller-1-ThrottledChannelReaper-ControllerMutation]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,135] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,143] INFO [ControllerServer id=1] Waiting for the 
controller metadata publishers to be installed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,145] INFO [MetadataLoader id=1] 
maybePublishMetadata(LOG_DELTA): The loader finished catching up to the current 
high water mark of 5845 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing SnapshotGenerator with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [ControllerServer id=1] Finished waiting for the 
controller metadata publishers to be installed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing KRaftMetadataCachePublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing FeaturesPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,146] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] 
Enabling request processing. (kafka.network.SocketServer)
[2025-03-27 20:40:55,147] INFO [ControllerServer id=1] Loaded new metadata 
Features(metadataVersion=4.0-IV3, finalizedFeatures={group.version=1, 
transaction.version=2, metadata.version=25}, finalizedFeaturesEpoch=5844). 
(org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerRegistrationsPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerRegistrationManager with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,147] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicConfigPublisher controller id=1 with a snapshot at offset 
5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] Found registration for 
6X6DBx4SR5mUuzRfYNKOjQ instead of our incarnation. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicClientQuotaPublisher controller id=1 with a snapshot at 
offset 5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DynamicTopicClusterQuotaPublisher controller id=1 with a snapshot 
at offset 5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,148] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ScramPublisher controller id=1 with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO Awaiting socket connections on 0.0.0.0:9093. 
(kafka.network.DataPlaneAcceptor)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing DelegationTokenPublisher controller id=1 with a snapshot at offset 
5844 (org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing ControllerMetadataMetricsPublisher with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,149] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing AclPublisher controller id=1 with a snapshot at offset 5844 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,153] INFO 
[controller-1-to-controller-registration-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,153] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] initialized channel manager. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Waiting for all of the 
authorizer futures to be completed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Finished waiting for all 
of the authorizer futures to be completed (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO 
[controller-1-to-controller-registration-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Waiting for all of the 
SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [ControllerServer id=1] Finished waiting for all 
of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2025-03-27 20:40:55,153] INFO [BrokerServer id=1] Transition from SHUTDOWN to 
STARTING (kafka.server.BrokerServer)
[2025-03-27 20:40:55,153] INFO [BrokerServer id=1] Starting broker 
(kafka.server.BrokerServer)
[2025-03-27 20:40:55,154] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] sendControllerRegistration: attempting to 
send ControllerRegistrationRequestData(controllerId=1, 
incarnationId=JyRNnk_XQx-SK2c6gpcqUw, zkMigrationReady=false, 
listeners=[Listener(name='CONTROLLER', host='localhost', port=9093, 
securityProtocol=0)], features=[Feature(name='group.version', 
minSupportedVersion=0, maxSupportedVersion=1), 
Feature(name='transaction.version', minSupportedVersion=0, 
maxSupportedVersion=2), Feature(name='eligible.leader.replicas.version', 
minSupportedVersion=0, maxSupportedVersion=1), Feature(name='kraft.version', 
minSupportedVersion=0, maxSupportedVersion=1), Feature(name='metadata.version', 
minSupportedVersion=7, maxSupportedVersion=25)]) 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Fetch]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Produce]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO [broker-1-ThrottledChannelReaper-Request]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,155] INFO 
[broker-1-ThrottledChannelReaper-ControllerMutation]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2025-03-27 20:40:55,165] INFO [BrokerServer id=1] Waiting for controller 
quorum voters future (kafka.server.BrokerServer)
[2025-03-27 20:40:55,165] INFO [BrokerServer id=1] Finished waiting for 
controller quorum voters future (kafka.server.BrokerServer)
[2025-03-27 20:40:55,166] INFO 
[broker-1-to-controller-forwarding-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,166] INFO 
[broker-1-to-controller-forwarding-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,169] INFO [client-metrics-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,176] INFO Updated connection-accept-rate max connection 
creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2025-03-27 20:40:55,177] INFO [SocketServer listenerType=BROKER, nodeId=1] 
Created data-plane acceptor and processors for endpoint : 
ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2025-03-27 20:40:55,179] INFO 
[broker-1-to-controller-alter-partition-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,179] INFO 
[broker-1-to-controller-alter-partition-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,182] INFO 
[broker-1-to-controller-directory-assignments-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,183] INFO 
[broker-1-to-controller-directory-assignments-channel-manager]: Recorded new 
KRaft controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,189] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,189] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,190] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,197] INFO [BrokerServer id=1] Using no op persister 
(kafka.server.BrokerServer)
[2025-03-27 20:40:55,197] INFO [group-coordinator-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,207] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] Our registration has been persisted to the 
metadata log. (kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,208] INFO [ControllerRegistrationManager id=1 
incarnation=JyRNnk_XQx-SK2c6gpcqUw] RegistrationResponseHandler: controller 
acknowledged ControllerRegistrationRequest. 
(kafka.server.ControllerRegistrationManager)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-0]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-1]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-2]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,208] INFO [group-coordinator-event-processor-3]: Starting 
(org.apache.kafka.coordinator.common.runtime.MultiThreadedEventProcessor$EventProcessorThread)
[2025-03-27 20:40:55,234] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Starting 
(kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,234] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new KRaft 
controller, from now on will use node localhost:9093 (id: 1 rack: null 
isFenced: false) (kafka.server.NodeToControllerRequestThread)
[2025-03-27 20:40:55,235] INFO [BrokerLifecycleManager id=1] Incarnation 
dXYWi31WSr-QyZzpsF7Aww of broker 1 in cluster lnPmGrhbRoq03yW06IptOg is now 
STARTING. (kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,239] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,241] INFO [share-group-lock-timeout-reaper]: Starting 
(org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper)
[2025-03-27 20:40:55,247] INFO [ExpirationReaper-0-null]: Starting 
(org.apache.kafka.server.purgatory.DelayedOperationPurgatory$ExpiredOperationReaper)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Waiting for the broker 
metadata publishers to be installed (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Finished waiting for the 
broker metadata publishers to be installed (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [BrokerServer id=1] Waiting for the controller 
to acknowledge that we are caught up (kafka.server.BrokerServer)
[2025-03-27 20:40:55,255] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing MetadataVersionPublisher(id=1) with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,255] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing BrokerMetadataPublisher with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,255] INFO [BrokerMetadataPublisher id=1] Publishing 
initial metadata at offset OffsetAndEpoch(offset=5845, epoch=4) with 
metadata.version Optional[4.0-IV3]. 
(kafka.server.metadata.BrokerMetadataPublisher)
[2025-03-27 20:40:55,256] INFO Loading logs from log dirs 
ArrayBuffer(/tmp/kraft-combined-logs) (kafka.log.LogManager)
[2025-03-27 20:40:55,257] INFO No logs found to be loaded in 
/tmp/kraft-combined-logs (kafka.log.LogManager)
[2025-03-27 20:40:55,260] INFO Loaded 0 logs in 4ms (kafka.log.LogManager)
[2025-03-27 20:40:55,260] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager)
[2025-03-27 20:40:55,261] INFO Starting log flusher with a default period of 
9223372036854775807 ms. (kafka.log.LogManager)
[2025-03-27 20:40:55,276] INFO [kafka-log-cleaner-thread-0]: Starting 
(kafka.log.LogCleaner$CleanerThread)
[2025-03-27 20:40:55,277] INFO [LogDirFailureHandler]: Starting 
(kafka.server.ReplicaManager$LogDirFailureHandler)
[2025-03-27 20:40:55,277] INFO [AddPartitionsToTxnSenderThread-1]: Starting 
(kafka.server.AddPartitionsToTxnManager)
[2025-03-27 20:40:55,278] INFO [GroupCoordinator id=1] Starting up. 
(org.apache.kafka.coordinator.group.GroupCoordinatorService)
[2025-03-27 20:40:55,278] INFO [GroupCoordinator id=1] Startup complete. 
(org.apache.kafka.coordinator.group.GroupCoordinatorService)
[2025-03-27 20:40:55,279] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator)
[2025-03-27 20:40:55,279] INFO [TxnMarkerSenderThread-1]: Starting 
(kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2025-03-27 20:40:55,280] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator)
[2025-03-27 20:40:55,283] INFO [MetadataLoader id=1] InitializeNewPublishers: 
initializing BrokerRegistrationTracker(id=1) with a snapshot at offset 5845 
(org.apache.kafka.image.loader.MetadataLoader)
[2025-03-27 20:40:55,347] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,552] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:55,959] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:56,777] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:40:58,391] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:01,604] INFO [BrokerLifecycleManager id=1] Unable to register 
broker 1 because the controller returned error DUPLICATE_BROKER_REGISTRATION 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,077] INFO [BrokerLifecycleManager id=1] Successfully 
registered broker 1 with broker epoch 5868 (kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,087] INFO [BrokerLifecycleManager id=1] The broker has 
caught up. Transitioning from STARTING to RECOVERY. 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Finished waiting for the 
controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Waiting for the initial 
broker metadata update to be published (kafka.server.BrokerServer)
[2025-03-27 20:41:06,088] INFO [BrokerServer id=1] Finished waiting for the 
initial broker metadata update to be published (kafka.server.BrokerServer)
[2025-03-27 20:41:06,089] INFO KafkaConfig values:
    add.partitions.to.txn.retry.backoff.max.ms = 100
    add.partitions.to.txn.retry.backoff.ms = 20
    advertised.listeners = 
PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.heartbeat.interval.ms = 2000
    broker.id = 1
    broker.rack = null
    broker.session.timeout.ms = 9000
    client.quota.callback.class = null
    compression.gzip.level = -1
    compression.lz4.level = 9
    compression.type = producer
    compression.zstd.level = 3
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    controlled.shutdown.enable = true
    controller.listener.names = CONTROLLER
    controller.performance.always.log.threshold.ms = 2000
    controller.performance.sample.period.ms = 60000
    controller.quorum.append.linger.ms = 25
    controller.quorum.bootstrap.servers = [localhost:9093]
    controller.quorum.election.backoff.max.ms = 1000
    controller.quorum.election.timeout.ms = 1000
    controller.quorum.fetch.timeout.ms = 2000
    controller.quorum.request.timeout.ms = 2000
    controller.quorum.retry.backoff.ms = 20
    controller.quorum.voters = []
    controller.quota.window.num = 11
    controller.quota.window.size.seconds = 1
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.max.lifetime.ms = 604800000
    delegation.token.secret.key = null
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    early.start.listeners = null
    fetch.max.bytes = 57671680
    fetch.purgatory.purge.interval.requests = 1000
    group.consumer.assignors = [uniform, range]
    group.consumer.heartbeat.interval.ms = 5000
    group.consumer.max.heartbeat.interval.ms = 15000
    group.consumer.max.session.timeout.ms = 60000
    group.consumer.max.size = 2147483647
    group.consumer.migration.policy = bidirectional
    group.consumer.min.heartbeat.interval.ms = 5000
    group.consumer.min.session.timeout.ms = 45000
    group.consumer.session.timeout.ms = 45000
    group.coordinator.append.linger.ms = 5
    group.coordinator.new.enable = true
    group.coordinator.rebalance.protocols = [classic, consumer]
    group.coordinator.threads = 4
    group.initial.rebalance.delay.ms = 3000
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    group.share.delivery.count.limit = 5
    group.share.enable = false
    group.share.heartbeat.interval.ms = 5000
    group.share.max.groups = 10
    group.share.max.heartbeat.interval.ms = 15000
    group.share.max.record.lock.duration.ms = 60000
    group.share.max.session.timeout.ms = 60000
    group.share.max.size = 200
    group.share.min.heartbeat.interval.ms = 5000
    group.share.min.record.lock.duration.ms = 15000
    group.share.min.session.timeout.ms = 45000
    group.share.partition.max.record.locks = 200
    group.share.persister.class.name = 
org.apache.kafka.server.share.persister.DefaultStatePersister
    group.share.record.lock.duration.ms = 30000
    group.share.session.timeout.ms = 45000
    initial.broker.registration.timeout.ms = 60000
    inter.broker.listener.name = PLAINTEXT
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    listener.security.protocol.map = 
CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    listeners = PLAINTEXT://:9092,CONTROLLER://:9093
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dir.failure.timeout.ms = 30000
    log.dirs = /tmp/kraft-combined-logs
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.initial.task.delay.ms = 30000
    log.local.retention.bytes = -2
    log.local.retention.ms = -2
    log.message.timestamp.after.max.ms = 3600000
    log.message.timestamp.before.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connection.creation.rate = 2147483647
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    max.incremental.fetch.session.cache.slots = 1000
    max.request.partition.size.limit = 2000
    message.max.bytes = 1048588
    metadata.log.dir = null
    metadata.log.max.record.bytes.between.snapshots = 20971520
    metadata.log.max.snapshot.interval.ms = 3600000
    metadata.log.segment.bytes = 1073741824
    metadata.log.segment.min.bytes = 8388608
    metadata.log.segment.ms = 604800000
    metadata.max.idle.interval.ms = 500
    metadata.max.retention.bytes = 104857600
    metadata.max.retention.ms = 604800000
    metric.reporters = [org.apache.kafka.common.metrics.JmxReporter]
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    node.id = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    principal.builder.class = class 
org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
    process.roles = [broker, controller]
    producer.id.expiration.check.interval.ms = 600000
    producer.id.expiration.ms = 86400000
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.window.num = 11
    quota.window.size.seconds = 1
    remote.fetch.max.wait.ms = 500
    remote.list.offsets.request.timeout.ms = 30000
    remote.log.index.file.cache.total.size.bytes = 1073741824
    remote.log.manager.copier.thread.pool.size = 10
    remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
    remote.log.manager.copy.quota.window.num = 11
    remote.log.manager.copy.quota.window.size.seconds = 1
    remote.log.manager.expiration.thread.pool.size = 10
    remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
    remote.log.manager.fetch.quota.window.num = 11
    remote.log.manager.fetch.quota.window.size.seconds = 1
    remote.log.manager.task.interval.ms = 30000
    remote.log.manager.task.retry.backoff.max.ms = 30000
    remote.log.manager.task.retry.backoff.ms = 500
    remote.log.manager.task.retry.jitter = 0.2
    remote.log.manager.thread.pool.size = 2
    remote.log.metadata.custom.metadata.max.bytes = 128
    remote.log.metadata.manager.class.name = 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
    remote.log.metadata.manager.class.path = null
    remote.log.metadata.manager.impl.prefix = rlmm.config.
    remote.log.metadata.manager.listener.name = null
    remote.log.reader.max.pending.tasks = 100
    remote.log.reader.threads = 10
    remote.log.storage.manager.class.name = null
    remote.log.storage.manager.class.path = null
    remote.log.storage.manager.impl.prefix = rsm.config.
    remote.log.storage.system.enable = false
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 30000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism.controller.protocol = GSSAPI
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    sasl.server.callback.handler.class = null
    sasl.server.max.receive.size = 524288
    security.inter.broker.protocol = PLAINTEXT
    security.providers = null
    server.max.startup.time.ms = 9223372036854775807
    share.coordinator.append.linger.ms = 10
    share.coordinator.load.buffer.size = 5242880
    share.coordinator.snapshot.update.records.per.snapshot = 500
    share.coordinator.state.topic.compression.codec = 0
    share.coordinator.state.topic.min.isr = 1
    share.coordinator.state.topic.num.partitions = 50
    share.coordinator.state.topic.prune.interval.ms = 300000
    share.coordinator.state.topic.replication.factor = 1
    share.coordinator.state.topic.segment.bytes = 104857600
    share.coordinator.threads = 1
    share.coordinator.write.timeout.ms = 5000
    share.fetch.max.fetch.records = 2147483647
    share.fetch.purgatory.purge.interval.requests = 1000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    socket.listen.backlog.size = 50
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.allow.dn.changes = false
    ssl.allow.san.changes = false
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    telemetry.max.bytes = 1048576
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
    transaction.max.timeout.ms = 900000
    transaction.partition.verification.enable = true
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 1
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    unclean.leader.election.interval.ms = 300000
    unstable.api.versions.enable = false
    unstable.feature.versions.enable = false
 (org.apache.kafka.common.config.AbstractConfig)
[2025-03-27 20:41:06,094] INFO [BrokerServer id=1] Waiting for the broker to be 
unfenced (kafka.server.BrokerServer)
[2025-03-27 20:41:06,127] INFO [BrokerLifecycleManager id=1] The broker has 
been unfenced. Transitioning from RECOVERY to RUNNING. 
(kafka.server.BrokerLifecycleManager)
[2025-03-27 20:41:06,131] INFO [BrokerServer id=1] Finished waiting for the 
broker to be unfenced (kafka.server.BrokerServer)
[2025-03-27 20:41:06,131] INFO authorizerStart completed for endpoint 
PLAINTEXT. Endpoint is now READY. 
(org.apache.kafka.server.network.EndpointReadyFutures)
[2025-03-27 20:41:06,131] INFO [SocketServer listenerType=BROKER, nodeId=1] 
Enabling request processing. (kafka.network.SocketServer)
[2025-03-27 20:41:06,132] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Waiting for all of the 
authorizer futures to be completed (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Finished waiting for all of 
the authorizer futures to be completed (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Waiting for all of the 
SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Finished waiting for all of 
the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2025-03-27 20:41:06,132] INFO [BrokerServer id=1] Transition from STARTING to 
STARTED (kafka.server.BrokerServer)
[2025-03-27 20:41:06,133] INFO Kafka version: 4.0.0 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,133] INFO Kafka commitId: 985bc99521dd22bb 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,133] INFO Kafka startTimeMs: 1743104466132 
(org.apache.kafka.common.utils.AppInfoParser)
[2025-03-27 20:41:06,134] INFO [KafkaRaftServer nodeId=1] Kafka Server started 
(kafka.server.KafkaRaftServer)
{code}
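The retry cadence in the log above (roughly 100 ms, 200 ms, 400 ms, ... up to the successful registration at 20:41:06) looks consistent with a doubling backoff between registration attempts. As a rough illustration only, here is a minimal sketch of such a capped exponential backoff schedule; the class and method names are hypothetical and this is not the actual BrokerLifecycleManager implementation:

```java
// Hypothetical sketch of a capped exponential backoff schedule, matching the
// ~100 ms -> ~3.2 s retry intervals visible in the timestamps above.
public class RegistrationBackoffSketch {

    // Delay before the given (0-based) retry attempt: doubles each time,
    // capped at maxMs. The shift is clamped to avoid long overflow.
    static long backoffMs(int attempt, long initialMs, long maxMs) {
        long delay = initialMs << Math.min(attempt, 30);
        return Math.min(delay, maxMs);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 7; attempt++) {
            System.out.println("attempt " + attempt + ": wait "
                    + backoffMs(attempt, 100, 5000) + " ms");
        }
    }
}
```

With an initial delay of 100 ms and a 5 s cap, this yields 100, 200, 400, 800, 1600, 3200, 5000 ms — close to the gaps between the DUPLICATE_BROKER_REGISTRATION retries observed here.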

> Broker registrations are slow if previously fenced or shutdown
> --------------------------------------------------------------
>
>                 Key: KAFKA-19047
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19047
>             Project: Kafka
>          Issue Type: Improvement
>    Affects Versions: 4.0.0
>            Reporter: Alyssa Huang
>            Assignee: Alyssa Huang
>            Priority: Major
>
> BrokerLifecycleManager cannot register a broker whose id was previously seen 
> with a different incarnation id until the old broker session expires. On 
> clean shutdown and restart of a broker, this can cause an unnecessary delay 
> in re-registration while the quorum controller waits for the session to 
> expire.
> ```
> [BrokerLifecycleManager id=1] Unable to register broker 1 because the 
> controller returned error DUPLICATE_BROKER_REGISTRATION
> ```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)