Hi Sebastian,

"Killed" implies that something external killed the process. Kafka's
default minimum heap size is 1G; you may be running in an environment that
limits processes to less than that, which would invoke the Linux OOM killer
or similar. What is the output of `free -m` and `ulimit -a` on the same box?

Kind regards,

Liam Clarke-Hutchinson

On Tue, Apr 21, 2020 at 7:28 PM <i...@fluent-software.de> wrote:

> ZooKeeper: /home/kafka/kafka/bin/zookeeper-server-start.sh
> /home/kafka/kafka/config/zookeeper.properties
> Kafka: /home/kafka/kafka/bin/kafka-server-start.sh
> /home/kafka/kafka/config/server.properties
>
> As described here:
> https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-debian-9
> But note that I am not using the service at the moment; I start
> ZooKeeper and Kafka manually for debugging purposes.
>
> Kind regards,
> Sebastian
>
>
> -----Original Message-----
> From: JOHN, BIBIN <bj9...@att.com>
> Sent: Monday, 20 April 2020 19:04
> To: users@kafka.apache.org
> Subject: RE: Help with setting up Kafka Node
>
> How are you starting it?
>
> -----Original Message-----
> From: i...@fluent-software.de <i...@fluent-software.de>
> Sent: Monday, April 20, 2020 11:56 AM
> To: users@kafka.apache.org
> Subject: AW: Help with setting up Kafka Node
>
> I can't see any errors. Changing the log level to debug, I got the
> following log:
>
> [2020-04-20 18:25:28,956] DEBUG Reading reply sessionid:0x10868ec81a70001,
> packet:: clientPath:/config/brokers/0 serverPath:/config/brokers/0
> finished:false header:: 16,4  replyHeader:: 16,38,-101
> request:: '/config/brokers/0,F  response::   (org.apache.zookeeper.ClientCnxn)
> [2020-04-20 18:25:28,963] INFO KafkaConfig values:
>         advertised.host.name = null
>         advertised.listeners = null
>         advertised.port = null
>         alter.config.policy.class.name = null
>         alter.log.dirs.replication.quota.window.num = 11
>         alter.log.dirs.replication.quota.window.size.seconds = 1
>         authorizer.class.name =
>         auto.create.topics.enable = true
>         auto.leader.rebalance.enable = true
>         background.threads = 10
>         broker.id = 0
>         broker.id.generation.enable = true
>         broker.rack = null
>         client.quota.callback.class = null
>         compression.type = producer
>         connection.failed.authentication.delay.ms = 100
>         connections.max.idle.ms = 600000
>         connections.max.reauth.ms = 0
>         control.plane.listener.name = null
>         controlled.shutdown.enable = true
>         controlled.shutdown.max.retries = 3
>         controlled.shutdown.retry.backoff.ms = 5000
>         controller.socket.timeout.ms = 30000
>         create.topic.policy.class.name = null
>         default.replication.factor = 1
>         delegation.token.expiry.check.interval.ms = 3600000
>         delegation.token.expiry.time.ms = 86400000
>         delegation.token.master.key = null
>         delegation.token.max.lifetime.ms = 604800000
>         delete.records.purgatory.purge.interval.requests = 1
>         delete.topic.enable = true
>         fetch.max.bytes = 57671680
>         fetch.purgatory.purge.interval.requests = 1000
>         group.initial.rebalance.delay.ms = 0
>         group.max.session.timeout.ms = 1800000
>         group.max.size = 2147483647
>         group.min.session.timeout.ms = 6000
>         host.name =
>         inter.broker.listener.name = null
>         inter.broker.protocol.version = 2.5-IV0
>         kafka.metrics.polling.interval.secs = 10
>         kafka.metrics.reporters = []
>         leader.imbalance.check.interval.seconds = 300
>         leader.imbalance.per.broker.percentage = 10
>         listener.security.protocol.map =
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
>         listeners = null
>         log.cleaner.backoff.ms = 15000
>         log.cleaner.dedupe.buffer.size = 134217728
>         log.cleaner.delete.retention.ms = 86400000
>         log.cleaner.enable = true
>         log.cleaner.io.buffer.load.factor = 0.9
>         log.cleaner.io.buffer.size = 524288
>         log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>         log.cleaner.max.compaction.lag.ms = 9223372036854775807
>         log.cleaner.min.cleanable.ratio = 0.5
>         log.cleaner.min.compaction.lag.ms = 0
>         log.cleaner.threads = 1
>         log.cleanup.policy = [delete]
>         log.dir = /tmp/kafka-logs
>         log.dirs = /tmp/kafka-logs
>         log.flush.interval.messages = 9223372036854775807
>         log.flush.interval.ms = null
>         log.flush.offset.checkpoint.interval.ms = 60000
>         log.flush.scheduler.interval.ms = 9223372036854775807
>         log.flush.start.offset.checkpoint.interval.ms = 60000
>         log.index.interval.bytes = 4096
>         log.index.size.max.bytes = 10485760
>         log.message.downconversion.enable = true
>         log.message.format.version = 2.5-IV0
>         log.message.timestamp.difference.max.ms = 9223372036854775807
>         log.message.timestamp.type = CreateTime
>         log.preallocate = false
>         log.retention.bytes = -1
>         log.retention.check.interval.ms = 300000
>         log.retention.hours = 168
>         log.retention.minutes = null
>         log.retention.ms = null
>         log.roll.hours = 168
>         log.roll.jitter.hours = 0
>         log.roll.jitter.ms = null
>         log.roll.ms = null
>         log.segment.bytes = 1073741824
>         log.segment.delete.delay.ms = 60000
>         max.connections = 2147483647
>         max.connections.per.ip = 2147483647
>         max.connections.per.ip.overrides =
>         max.incremental.fetch.session.cache.slots = 1000
>         message.max.bytes = 1048588
>         metric.reporters = []
>         metrics.num.samples = 2
>         metrics.recording.level = INFO
>         metrics.sample.window.ms = 30000
>         min.insync.replicas = 1
>         num.io.threads = 8
>         num.network.threads = 3
>         num.partitions = 1
>         num.recovery.threads.per.data.dir = 1
>         num.replica.alter.log.dirs.threads = null
>         num.replica.fetchers = 1
>         offset.metadata.max.bytes = 4096
>         offsets.commit.required.acks = -1
>         offsets.commit.timeout.ms = 5000
>         offsets.load.buffer.size = 5242880
>         offsets.retention.check.interval.ms = 600000
>         offsets.retention.minutes = 10080
>         offsets.topic.compression.codec = 0
>         offsets.topic.num.partitions = 50
>         offsets.topic.replication.factor = 1
>         offsets.topic.segment.bytes = 104857600
>         password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
>         password.encoder.iterations = 4096
>         password.encoder.key.length = 128
>         password.encoder.keyfactory.algorithm = null
>         password.encoder.old.secret = null
>         password.encoder.secret = null
>         port = 9092
>         principal.builder.class = null
>         producer.purgatory.purge.interval.requests = 1000
>         queued.max.request.bytes = -1
>         queued.max.requests = 500
>         quota.consumer.default = 9223372036854775807
>         quota.producer.default = 9223372036854775807
>         quota.window.num = 11
>         quota.window.size.seconds = 1
>         replica.fetch.backoff.ms = 1000
>         replica.fetch.max.bytes = 1048576
>         replica.fetch.min.bytes = 1
>         replica.fetch.response.max.bytes = 10485760
>         replica.fetch.wait.max.ms = 500
>         replica.high.watermark.checkpoint.interval.ms = 5000
>         replica.lag.time.max.ms = 30000
>         replica.selector.class = null
>         replica.socket.receive.buffer.bytes = 65536
>         replica.socket.timeout.ms = 30000
>         replication.quota.window.num = 11
>         replication.quota.window.size.seconds = 1
>         request.timeout.ms = 30000
>         reserved.broker.max.id = 1000
>         sasl.client.callback.handler.class = null
>         sasl.enabled.mechanisms = [GSSAPI]
>         sasl.jaas.config = null
>         sasl.kerberos.kinit.cmd = /usr/bin/kinit
>         sasl.kerberos.min.time.before.relogin = 60000
>         sasl.kerberos.principal.to.local.rules = [DEFAULT]
>         sasl.kerberos.service.name = null
>         sasl.kerberos.ticket.renew.jitter = 0.05
>         sasl.kerberos.ticket.renew.window.factor = 0.8
>         sasl.login.callback.handler.class = null
>         sasl.login.class = null
>         sasl.login.refresh.buffer.seconds = 300
>         sasl.login.refresh.min.period.seconds = 60
>         sasl.login.refresh.window.factor = 0.8
>         sasl.login.refresh.window.jitter = 0.05
>         sasl.mechanism.inter.broker.protocol = GSSAPI
>         sasl.server.callback.handler.class = null
>         security.inter.broker.protocol = PLAINTEXT
>         security.providers = null
>         socket.receive.buffer.bytes = 102400
>         socket.request.max.bytes = 104857600
>         socket.send.buffer.bytes = 102400
>         ssl.cipher.suites = []
>         ssl.client.auth = none
>         ssl.enabled.protocols = [TLSv1.2]
>         ssl.endpoint.identification.algorithm = https
>         ssl.key.password = null
>         ssl.keymanager.algorithm = SunX509
>         ssl.keystore.location = null
>         ssl.keystore.password = null
>         ssl.keystore.type = JKS
>         ssl.principal.mapping.rules = DEFAULT
>         ssl.protocol = TLSv1.2
>         ssl.provider = null
>         ssl.secure.random.implementation = null
>         ssl.trustmanager.algorithm = PKIX
>         ssl.truststore.location = null
>         ssl.truststore.password = null
>         ssl.truststore.type = JKS
>         transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
>         transaction.max.timeout.ms = 900000
>         transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
>         transaction.state.log.load.buffer.size = 5242880
>         transaction.state.log.min.isr = 1
>         transaction.state.log.num.partitions = 50
>         transaction.state.log.replication.factor = 1
>         transaction.state.log.segment.bytes = 104857600
>         transactional.id.expiration.ms = 604800000
>         unclean.leader.election.enable = false
>         zookeeper.clientCnxnSocket = null
>         zookeeper.connect = localhost:2181
>         zookeeper.connection.timeout.ms = 18000
>         zookeeper.max.in.flight.requests = 10
>         zookeeper.session.timeout.ms = 18000
>         zookeeper.set.acl = false
>         zookeeper.ssl.cipher.suites = null
>         zookeeper.ssl.client.enable = false
>         zookeeper.ssl.crl.enable = false
>         zookeeper.ssl.enabled.protocols = null
>         zookeeper.ssl.endpoint.identification.algorithm = HTTPS
>         zookeeper.ssl.keystore.location = null
>         zookeeper.ssl.keystore.password = null
>         zookeeper.ssl.keystore.type = null
>         zookeeper.ssl.ocsp.enable = false
>         zookeeper.ssl.protocol = TLSv1.2
>         zookeeper.ssl.truststore.location = null
>         zookeeper.ssl.truststore.password = null
>         zookeeper.ssl.truststore.type = null
>         zookeeper.sync.time.ms = 2000
>  (kafka.server.KafkaConfig)
> [2020-04-20 18:25:29,075] INFO [ThrottledChannelReaper-Fetch]: Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> [2020-04-20 18:25:29,085] INFO [ThrottledChannelReaper-Produce]: Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> [2020-04-20 18:25:29,092] INFO [ThrottledChannelReaper-Request]: Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> [2020-04-20 18:25:29,128] DEBUG Reading reply sessionid:0x10868ec81a70001,
> packet:: clientPath:/brokers/topics serverPath:/brokers/topics
> finished:false header:: 17,12  replyHeader:: 17,38,0
> request:: '/brokers/topics,F  response::
> v{},s{6,6,1587399640743,1587399640743,0,0,0,0,0,0,6}
> (org.apache.zookeeper.ClientCnxn)
> [2020-04-20 18:25:29,197] INFO Loading logs. (kafka.log.LogManager)
> [2020-04-20 18:25:29,257] INFO Logs loading complete in 59 ms.
> (kafka.log.LogManager)
> [2020-04-20 18:25:29,352] INFO Starting log cleanup with a period of
> 300000 ms. (kafka.log.LogManager)
> [2020-04-20 18:25:29,375] INFO Starting log flusher with a default period
> of 9223372036854775807 ms. (kafka.log.LogManager)
> Killed
>
> Kind regards,
> Sebastian.
>
> -----Original Message-----
> From: Lisheng Wang <wanglishen...@gmail.com>
> Sent: Monday, 20 April 2020 17:56
> To: users@kafka.apache.org
> Subject: Re: Help with setting up Kafka Node
>
> ZK's log seems normal; the KeeperException does not matter when you
> start a fresh cluster. Could you post your ZK config? Was there any error
> or exception in Kafka's log? If not, you can change the log level to DEBUG
> to see if something is wrong.
>
> Best,
> Lisheng
>
>
> <i...@fluent-software.de> wrote on Mon, Apr 20, 2020 at 11:19 PM:
>
> > Hi,
> >
> >
> >
> > I have a problem with a Kafka installation and need some help to move
> > forward with my issue. Maybe someone can help.
> >
> > If I start Kafka, the process is killed during the startup phase
> > after a few seconds:
> >
> >
> >
> > [2020-04-20 17:12:26,244] INFO [ThrottledChannelReaper-Fetch]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2020-04-20 17:12:26,248] INFO [ThrottledChannelReaper-Produce]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2020-04-20 17:12:26,255] INFO [ThrottledChannelReaper-Request]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2020-04-20 17:12:26,350] INFO Loading logs. (kafka.log.LogManager)
> >
> > [2020-04-20 17:12:26,369] INFO Logs loading complete in 19 ms.
> > (kafka.log.LogManager)
> >
> > [2020-04-20 17:12:26,407] INFO Starting log cleanup with a period of
> > 300000 ms. (kafka.log.LogManager)
> >
> > [2020-04-20 17:12:26,412] INFO Starting log flusher with a default
> > period of 9223372036854775807 ms. (kafka.log.LogManager)
> >
> > Killed
> >
> >
> >
> > ZooKeeper starts successfully, and then the session disconnects:
> >
> >
> >
> > [2020-04-20 17:11:58,022] INFO Server
> > environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,022] INFO Server environment:java.io.tmpdir=/tmp
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,022] INFO Server environment:java.compiler=<NA>
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,023] INFO Server environment:os.name=Linux
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,023] INFO Server environment:os.arch=amd64
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,024] INFO Server
> > environment:os.version=4.15.18-11-pve
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,024] INFO Server environment:user.name=kafka
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,024] INFO Server
> > environment:user.home=/home/kafka
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,024] INFO Server environment:user.dir=/tmp
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,052] INFO tickTime set to 3000
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,052] INFO minSessionTimeout set to -1
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,052] INFO maxSessionTimeout set to -1
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:11:58,085] INFO Using
> > org.apache.zookeeper.server.NIOServerCnxnFactory as server connection
> > factory (org.apache.zookeeper.server.ServerCnxnFactory)
> >
> > [2020-04-20 17:11:58,110] INFO binding to port 0.0.0.0/0.0.0.0:2181
> > (org.apache.zookeeper.server.NIOServerCnxnFactory)
> >
> > [2020-04-20 17:12:24,811] INFO Accepted socket connection from
> > /0:0:0:0:0:0:0:1:47236
> > (org.apache.zookeeper.server.NIOServerCnxnFactory)
> >
> > [2020-04-20 17:12:24,835] INFO Client attempting to establish new
> > session at /0:0:0:0:0:0:0:1:47236
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:12:24,840] INFO Creating new log file: log.28
> > (org.apache.zookeeper.server.persistence.FileTxnLog)
> >
> > [2020-04-20 17:12:24,897] INFO Established session 0x10868af2ada0000
> > with negotiated timeout 6000 for client /0:0:0:0:0:0:0:1:47236
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:12:25,062] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x1 zxid:0x29
> > txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode =
> > NodeExists for /consumers (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,120] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x2 zxid:0x2a
> > txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode =
> > NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,130] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x3 zxid:0x2b
> > txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode =
> > NodeExists for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,141] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x4 zxid:0x2c
> > txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode =
> > NodeExists for /config/changes (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,152] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x5 zxid:0x2d
> > txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode =
> > NodeExists for /admin/delete_topics (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,163] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x6 zxid:0x2e
> > txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode =
> > NodeExists for /brokers/seqid (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,175] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x7 zxid:0x2f
> > txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode =
> > NodeExists for /isr_change_notification (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,187] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x8 zxid:0x30
> > txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode =
> > NodeExists for /latest_producer_id_block (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,198] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0x9 zxid:0x31
> > txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode =
> > NodeExists for /log_dir_event_notification (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,208] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0xa zxid:0x32
> > txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode =
> > NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,218] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0xb zxid:0x33
> > txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode =
> > NodeExists for /config/clients (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,228] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0xc zxid:0x34
> > txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode =
> > NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:25,237] INFO Got user-level KeeperException when
> > processing sessionid:0x10868af2ada0000 type:create cxid:0xd zxid:0x35
> > txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode =
> > NodeExists for /config/brokers (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> > [2020-04-20 17:12:26,937] WARN Unable to read additional data from
> > client sessionid 0x10868af2ada0000, likely client has closed socket
> > (org.apache.zookeeper.server.NIOServerCnxn)
> >
> > [2020-04-20 17:12:26,941] INFO Closed socket connection for client
> > /0:0:0:0:0:0:0:1:47236 which had sessionid 0x10868af2ada0000
> > (org.apache.zookeeper.server.NIOServerCnxn)
> >
> > [2020-04-20 17:12:34,615] INFO Expiring session 0x10868af2ada0000,
> > timeout of 6000ms exceeded
> > (org.apache.zookeeper.server.ZooKeeperServer)
> >
> > [2020-04-20 17:12:34,616] INFO Processed session termination for
> > sessionid: 0x10868af2ada0000
> > (org.apache.zookeeper.server.PrepRequestProcessor)
> >
> >
> >
> >
> >
> > Any ideas about that?
> >
> >
> >
> > Kind regards,
> >
> > Sebastian
> >
> >
> >
> >
>
>
>
