Hello Brian
It was the most basic Azure VM instance (1 vCPU and 0.5 GB RAM) running
Ubuntu Server 20.04 LTS.

Good point about 'dmesg'; I had forgotten to check that log.
I tried again using a VM with 0.5 GB of RAM and a JVM heap size of 256 MB,
then 400 MB, and you were right: in the 'dmesg' logs *I could see the error*:

[  929.961137]
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-3.scope,task=java,pid=7092,uid=1000

[  929.961233] Out of memory: Killed process 7092 (java)
total-vm:2591296kB, anon-rss:177320kB, file-rss:708kB, shmem-rss:0kB,
UID:1000 pgtables:704kB oom_score_adj:0
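
For anyone else hitting this, here is roughly how I filtered these entries
out of the kernel log (a sketch, assuming the util-linux dmesg shipped with
Ubuntu, which supports -T for human-readable timestamps):

```shell
# Show the kernel log with readable timestamps and keep only OOM-killer events
sudo dmesg -T | grep -iE 'oom-kill|out of memory'
```

That surfaces the "Killed process ... (java)" line without scrolling through
the whole buffer.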

Seeing this message in the log is a relief, and like you I am also curious
about the memory requirements, at least the minimum needed for some
development tasks.
Maybe it's a good idea to state the minimum memory required in the
quickstart 😅 since I followed the official quickstart with 0.5 GB of RAM,
but it looks like more is needed (1 GB of physical RAM with a somewhat
smaller JVM heap; 700 MB worked for me).
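
In case it is useful, this is roughly how I lowered the heap: the startup
script bin/kafka-server-start.sh honors the KAFKA_HEAP_OPTS environment
variable (the -Xms value here is just my guess at a sensible floor, not
something from the docs):

```shell
# Override the default broker heap (-Xmx1G -Xms1G) so it fits a 1 GB VM;
# 700 MB was the largest -Xmx that started cleanly for me
export KAFKA_HEAP_OPTS="-Xms256M -Xmx700M"
bin/kafka-server-start.sh config/kraft/server.properties
```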

Thanks for your assistance Brian!!

Angel


More of the dmesg output:

[   49.949438] hv_balloon: Max. dynamic memory size: 512 MB

[  611.586915] python3 invoked oom-killer:
gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

[  611.586924] CPU: 0 PID: 794 Comm: python3 Not tainted 5.15.0-1033-azure
#40~20.04.1-Ubuntu

[  611.586927] Hardware name: Microsoft Corporation Virtual
Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 05/09/2022

[  611.586929] Call Trace:

...

...

...

[  611.587077] lowmem_reserve[]: 0 379 379 379 379

[  611.587080] Node 0 DMA32 free:3244kB min:7924kB low:8532kB high:9140kB
reserved_highatomic:0KB active_anon:2440kB inactive_anon:270216kB
active_file:120kB inactive_file:84kB unevictable:21740kB writepending:0kB
present:507144kB managed:397100kB mlocked:18668kB bounce:0kB free_pcp:448kB
local_pcp:448kB free_cma:0kB

[  611.587085] lowmem_reserve[]: 0 0 0 0 0

[  611.587087] Node 0 DMA: 3*4kB (U) 9*8kB (UM) 13*16kB (UM) 3*32kB (U)
1*64kB (M) 1*128kB (M) 0*256kB 0*512kB 1*1024kB (M) 0*2048kB 0*4096kB =
1604kB

[  611.587099] Node 0 DMA32: 72*4kB (ME) 74*8kB (UME) 85*16kB (UME)
32*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB =
3264kB

[  611.587108] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0
hugepages_size=1048576kB

[  611.587110] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0
hugepages_size=2048kB

[  611.587111] 3128 total pagecache pages

[  611.587112] 0 pages in swap cache

[  611.587113] Swap cache stats: add 0, delete 0, find 0/0

[  611.587114] Free swap  = 0kB

[  611.587115] Total swap = 0kB

[  611.587115] 130785 pages RAM

[  611.587116] 0 pages HighMem/MovableOnly

[  611.587117] 27670 pages reserved

[  611.587117] 0 pages hwpoisoned

[  611.587118] Tasks state (memory values in pages):

[  611.587119] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes
swapents oom_score_adj name

[  611.587122] [    166]     0   166    13039      927    90112        0
      -250 systemd-journal

[  611.587125] [    202]     0   202     2453      982    57344        0
      -1000 systemd-udevd

[  611.587127] [    293]     0   293     1032      669    49152        0
          0 hv_kvp_daemon

[  611.587129] [    423]     0   423    70085     4501    90112        0
      -1000 multipathd

[  611.587131] [    553]   100   553     6851      873    86016        0
          0 systemd-network

[  611.587133] [    558]   101   558     6105     1734    86016        0
          0 systemd-resolve

[  611.587135] [    754]     0   754    60265      836   102400        0
          0 accounts-daemon

[  611.587137] [    762]     0   762     2137      559    57344        0
          0 cron

[  611.587139] [    764]   103   764     1897      942    57344        0
      -900 dbus-daemon

[  611.587141] [    768]   113   768     1207      393    49152        0
          0 chronyd

[  611.587143] [    769]   113   769     1174       45    45056        0
          0 chronyd

[  611.587145] [    777]     0   777     7469     2854    90112        0
          0 networkd-dispat

[  611.587147] [    780]     0   780    59112      699    98304        0
          0 polkitd

[  611.587149] [    788]   104   788    56125      931    81920        0
          0 rsyslogd

[  611.587150] [    790]     0   790   184017     4107   262144        0
      -900 snapd

[  611.587152] [    791]     0   791     4348      963    69632        0
          0 systemd-logind

[  611.587154] [    793]     0   793    98907     1318   135168        0
          0 udisksd

[  611.587156] [    794]     0   794     7374     3733    98304        0
          0 python3

[  611.587158] [    795]     0   795      951      499    45056        0
          0 atd

[  611.587160] [    872]     0   872    79708     1241   118784        0
          0 ModemManager

[  611.587162] [    873]     0   873    27032     2724   106496        0
          0 unattended-upgr

[  611.587164] [    880]     0   880     1840      417    53248        0
          0 agetty

[  611.587165] [    904]     0   904     1459      387    45056        0
          0 agetty

[  611.587167] [    980]     0   980      624      145    49152        0
          0 bpfilter_umh

[  611.587169] [   1420]     0  1420     3047      799    65536        0
      -1000 sshd

[  611.587171] [   1464]     0  1464   100560     4742   151552        0
          0 python3

[  611.587173] [   1625]     0  1625     3485      984    69632        0
          0 sshd

[  611.587175] [   1628]  1000  1628     4797     1112    77824        0
          0 systemd

[  611.587177] [   1629]  1000  1629    26367     1180    94208        0
          0 (sd-pam)

[  611.587178] [   1745]  1000  1745     3518      634    69632        0
          0 sshd

[  611.587180] [   1746]  1000  1746     2547      958    57344        0
          0 bash

[  611.587181] [   2120]     0  2120    70957      685   139264        0
          0 packagekitd

[  611.587183] [   2177]     0  2177     3483     1051    73728        0
          0 sshd

[  611.587185] [   2250]  1000  2250     3517      774    73728        0
          0 sshd

[  611.587187] [   2251]  1000  2251     2628     1085    57344        0
          0 bash

[  611.587188] [   5634]  1000  5634     1818       17    53248        0
          0 tail

[  611.587191] [   6662]  1000  6662   610502    44038   712704        0
          0 java

[  611.587192]
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-3.scope,task=java,pid=6662,uid=1000

[  611.587278] Out of memory: Killed process 6662 (java)
total-vm:2442008kB, anon-rss:176152kB, file-rss:0kB, shmem-rss:0kB,
UID:1000 pgtables:696kB oom_score_adj:0

[  611.628039] ExpirationReape: page allocation failure: order:0,
mode:0x1100cca(GFP_HIGHUSER_MOVABLE),
nodemask=(null),cpuset=/,mems_allowed=0

[  611.628052] CPU: 0 PID: 7047 Comm: ExpirationReape Not tainted
5.15.0-1033-azure #40~20.04.1-Ubuntu

[  611.628055] Hardware name: Microsoft Corporation Virtual
Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 05/09/2022





On Wed, Feb 15, 2023 at 11:28 PM Brian @ Rickabaugh.Net <
br...@rickabaugh.net> wrote:

> Angel,
>
> What OS are you running on the VM?  Some Linux variant?
>
> The Linux kernel will get aggressive with processes that get really greedy
> with resources, especially memory.  You might check your ‘dmesg’ logs to
> look for reports of the kernel asserting itself.  It will summarily kill
> user processes if it deems it necessary to keep the OS stable.
>
> I’d be curious what the group thinks are the memory requirements for the
> heap but I suspect it is less than 700 MB if you’re just running some
> light, experimental code.
>
> Brian
>
> > On Feb 15, 2023, at 10:52 PM, Angel Motta <angelmo...@gmail.com> wrote:
> >
> > I tried with another virtual machine with 1GB of RAM and the jvm heap
> size
> > setting to 700MB and It worked! Kafka started successfully.
> > It looks like the default setting of 1GB heap size in the
> server.properties
> > is the minimum? or near the minimum right?
> >
> > I wanted the minimum resources (memory) because this is for development
> > purposes so I think I will have to consider this as the minimum.
> > I wanted to try the vanilla installation to get a little bit familiar
> > before trying the docker mode installation and I hope it works well :)
> >
> > Any comment about it is welcomed. I couldn't find any log indicating the
> > cause of the kill but I think It was an out of memory error so I share my
> > experience for anyone having similar issues.
> >
> >
> >> On Wed, Feb 15, 2023 at 9:58 PM Angel Motta <angelmo...@gmail.com>
> wrote:
> >>
> >> I deleted the content of the log directory and format again as is
> >> indicated in the quickstart but same problem. Kafka process is killed
> >> during startup in the same way.
> >>
> >>
> >> On Wed, Feb 15, 2023 at 9:11 PM sunil chaudhari <
> >> sunilmchaudhar...@gmail.com> wrote:
> >>
> >>> Ok sorry to misinterpret your message.
> >>> However if you  are learning and its your first time, you can clear the
> >>> data from the logs directory defined in server.properties.
> >>> And start fresh instance and then see what happens.
> >>> I hope you have followed all documents properly.
> >>>
> >>>> On Thu, 16 Feb 2023 at 7:07 AM, Angel Motta <angelmo...@gmail.com>
> wrote:
> >>>
> >>>> Hello Sunil
> >>>> Thanks for your message. Only to clarify that I never could
> successfully
> >>>> start Kafka. This my first attempt to start Kafka and I get the
> "killed"
> >>>> message as shown in the log.
> >>>> Knowing this fact is necessary to remove all data and logs? If this
> >>> helps,
> >>>> could you tell me how to do this?
> >>>>
> >>>>
> >>>>
> >>>> On Wed, Feb 15, 2023 at 8:29 PM sunil chaudhari <
> >>>> sunilmchaudhar...@gmail.com>
> >>>> wrote:
> >>>>
> >>>>> Remove all data and logs.
> >>>>> And start it.
> >>>>> Next time when you want to stop then dont kill the process with kill
> >>>>> command.
> >>>>> Stop it gracefully using kafka-server-stop under /bin
> >>>>> Kafka needs stop signal to do some cleanup operations before it
> >>> stops. So
> >>>>> kill is not the option.
> >>>>>
> >>>>> On Thu, 16 Feb 2023 at 6:49 AM, Angel Motta <angelmo...@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>>> Hello everyone!
> >>>>>> New Kafka user here. I am learning Kafka and trying to run it for
> >>> the
> >>>>>> first time (part of my CS undergrad thesis)
> >>>>>> I have the minimum server in AWS EC2 with 500MB of RAM and whenever
> >>> I
> >>>> try
> >>>>>> to start Kafka (kraft mode) I get "killed" as a last message.
> >>>>>>
> >>>>>> First I had to change the heap size from default (1GB) because I
> >>>> received
> >>>>>> an error about it so now It is 256M and now I see the logs of
> >>> startup
> >>>>>> process but the process is killed.
> >>>>>>
> >>>>>> How can I know the cause of this error?
> >>>>>> I paste the last part of the log here. I just followed the
> >>> quickstart
> >>>>>> https://kafka.apache.org/quickstart
> >>>>>> I also quickly saw the log directory (server, controller) but any
> >>> hint
> >>>> to
> >>>>>> me.
> >>>>>>
> >>>>>> Thanks in advance for your assistance.
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,082] INFO [LogLoader
> >>>> partition=__cluster_metadata-0,
> >>>>>> dir=/tmp/kraft-combined-logs] Producer state recovery took 3ms for
> >>>>> snapshot
> >>>>>> load and 0ms for segment recovery from offset 28
> >>>> (kafka.log.UnifiedLog$)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,131] INFO Initialized snapshots with IDs
> >>>> SortedSet()
> >>>>>> from /tmp/kraft-combined-logs/__cluster_metadata-0
> >>>>>> (kafka.raft.KafkaMetadataLog$)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,183] INFO [raft-expiration-reaper]: Starting
> >>>>>> (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,523] INFO [RaftManager nodeId=1] Completed
> >>>>> transition
> >>>>>> to ResignedState(localId=1, epoch=9, voters=[1],
> >>>> electionTimeoutMs=1695,
> >>>>>> unackedVoters=[], preferredSuccessors=[])
> >>>>>> (org.apache.kafka.raft.QuorumState)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,533] INFO [RaftManager nodeId=1] Completed
> >>>>> transition
> >>>>>> to CandidateState(localId=1, epoch=10, retries=1,
> >>>> electionTimeoutMs=1539)
> >>>>>> (org.apache.kafka.raft.QuorumState)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,549] INFO [RaftManager nodeId=1] Completed
> >>>>> transition
> >>>>>> to Leader(localId=1, epoch=10, epochStartOffset=28,
> >>>>>> highWatermark=Optional.empty, voterStates={1=ReplicaState(nodeId=1,
> >>>>>> endOffset=Optional.empty, lastFetchTimestamp=-1,
> >>>>> lastCaughtUpTimestamp=-1,
> >>>>>> hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,615] INFO [kafka-raft-outbound-request-thread]:
> >>>>>> Starting (kafka.raft.RaftSendThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,620] INFO [kafka-raft-io-thread]: Starting
> >>>>>> (kafka.raft.KafkaRaftManager$RaftIoThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,694] INFO [RaftManager nodeId=1] High watermark
> >>>> set
> >>>>> to
> >>>>>> LogOffsetMetadata(offset=29,
> >>>>>>
> >>>>
> metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=3044)])
> >>>>>> for the first time for epoch 10 based on indexOfHw 0 and voters
> >>>>>> [ReplicaState(nodeId=1,
> >>> endOffset=Optional[LogOffsetMetadata(offset=29,
> >>>>>>
> >>>>>
> >>>>
> >>>
> metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=3044)])],
> >>>>>> lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1,
> >>>>>> hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,723] INFO [RaftManager nodeId=1] Registered the
> >>>>>> listener org.apache.kafka.image.loader.MetadataLoader@1633422834
> >>>>>> (org.apache.kafka.raft.KafkaRaftClient)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,761] INFO [RaftManager nodeId=1] Registered the
> >>>>>> listener
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@629125780
> >>>>>> (org.apache.kafka.raft.KafkaRaftClient)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,780] INFO [MetadataLoader 1] Publishing initial
> >>>>>> snapshot at offset 27 to SnapshotGenerator
> >>>>>> (org.apache.kafka.image.loader.MetadataLoader)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,798] INFO [ThrottledChannelReaper-Fetch]:
> >>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,799] INFO [ThrottledChannelReaper-Produce]:
> >>>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,808] INFO [ThrottledChannelReaper-Request]:
> >>>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,813] INFO
> >>>>> [ThrottledChannelReaper-ControllerMutation]:
> >>>>>> Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,849] INFO [ExpirationReaper-1-AlterAcls]:
> >>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,865] INFO [SocketServer
> >>> listenerType=CONTROLLER,
> >>>>>> nodeId=1] Enabling request processing. (kafka.network.SocketServer)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,874] INFO [BrokerServer id=1] Transition from
> >>>>> SHUTDOWN
> >>>>>> to STARTING (kafka.server.BrokerServer)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,877] INFO [BrokerServer id=1] Starting broker
> >>>>>> (kafka.server.BrokerServer)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,918] INFO [ThrottledChannelReaper-Fetch]:
> >>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,918] INFO [ThrottledChannelReaper-Produce]:
> >>>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,919] INFO [ThrottledChannelReaper-Request]:
> >>>> Starting
> >>>>>> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,919] INFO
> >>>>> [ThrottledChannelReaper-ControllerMutation]:
> >>>>>> Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,957] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=forwarding]: Starting
> >>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:54,972] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=forwarding]: Recorded new controller, from now on will use node
> >>>>>> localhost:9093 (id: 1 rack: null)
> >>>>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,059] INFO Updated connection-accept-rate max
> >>>>>> connection creation rate to 2147483647
> >>> (kafka.network.ConnectionQuotas)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,065] INFO Awaiting socket connections on
> >>>>> 0.0.0.0:9092
> >>>>>> .
> >>>>>> (kafka.network.DataPlaneAcceptor)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,074] INFO [SocketServer listenerType=BROKER,
> >>>>> nodeId=1]
> >>>>>> Created data-plane acceptor and processors for endpoint :
> >>>>>> ListenerName(PLAINTEXT) (kafka.network.SocketServer)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,082] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=alterPartition]: Starting
> >>>>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,082] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=alterPartition]: Recorded new controller, from now on will use
> >>>> node
> >>>>>> localhost:9093 (id: 1 rack: null)
> >>>>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,101] INFO [ExpirationReaper-1-Produce]:
> >>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,101] INFO [ExpirationReaper-1-Fetch]: Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,101] INFO [ExpirationReaper-1-DeleteRecords]:
> >>>>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,112] INFO [ExpirationReaper-1-ElectLeader]:
> >>>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,152] INFO [ExpirationReaper-1-Heartbeat]:
> >>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,153] INFO [ExpirationReaper-1-Rebalance]:
> >>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,282] INFO [RaftManager nodeId=1] Registered the
> >>>>>> listener kafka.server.metadata.BrokerMetadataListener@1398770581
> >>>>>> (org.apache.kafka.raft.KafkaRaftClient)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,287] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=heartbeat]: Starting
> >>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,287] INFO [BrokerToControllerChannelManager
> >>>> broker=1
> >>>>>> name=heartbeat]: Recorded new controller, from now on will use node
> >>>>>> localhost:9093 (id: 1 rack: null)
> >>>>>> (kafka.server.BrokerToControllerRequestThread)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,301] INFO [BrokerLifecycleManager id=1]
> >>>> Incarnation
> >>>>>> K3MwUk8zQUGgIXaeg0yY_w of broker 1 in cluster
> >>> zodne5JyThyBUzTtQnIADg is
> >>>>> now
> >>>>>> STARTING. (kafka.server.BrokerLifecycleManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,393] INFO [ExpirationReaper-1-AlterAcls]:
> >>> Starting
> >>>>>> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,405] INFO [BrokerServer id=1] Waiting for
> >>> broker
> >>>>>> metadata to catch up. (kafka.server.BrokerServer)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,537] INFO [BrokerLifecycleManager id=1]
> >>>> Successfully
> >>>>>> registered broker 1 with broker epoch 30
> >>>>>> (kafka.server.BrokerLifecycleManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,546] INFO [BrokerLifecycleManager id=1] The
> >>> broker
> >>>>> has
> >>>>>> caught up. Transitioning from STARTING to RECOVERY.
> >>>>>> (kafka.server.BrokerLifecycleManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,549] INFO [BrokerMetadataListener id=1]
> >>> Starting
> >>>> to
> >>>>>> publish metadata events at offset 30.
> >>>>>> (kafka.server.metadata.BrokerMetadataListener)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,552] INFO [BrokerMetadataPublisher id=1]
> >>>> Publishing
> >>>>>> initial metadata at offset OffsetAndEpoch(offset=30, epoch=10) with
> >>>>>> metadata.version 3.4-IV0.
> >>>> (kafka.server.metadata.BrokerMetadataPublisher)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,553] INFO Loading logs from log dirs
> >>>>>> ArraySeq(/tmp/kraft-combined-logs) (kafka.log.LogManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,563] INFO [BrokerLifecycleManager id=1] The
> >>> broker
> >>>>> is
> >>>>>> in RECOVERY. (kafka.server.BrokerLifecycleManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,565] INFO Attempting recovery for all logs in
> >>>>>> /tmp/kraft-combined-logs since no clean shutdown file was found
> >>>>>> (kafka.log.LogManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,591] INFO Loaded 0 logs in 38ms.
> >>>>>> (kafka.log.LogManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,594] INFO Starting log cleanup with a period of
> >>>>> 300000
> >>>>>> ms. (kafka.log.LogManager)
> >>>>>>
> >>>>>> [2023-02-16 01:00:55,599] INFO Starting log flusher with a default
> >>>> period
> >>>>>> of 9223372036854775807 ms. (kafka.log.LogManager)
> >>>>>>
> >>>>>> Killed
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
>
>
