Praveen created KAFKA-8849:
------------------------------

             Summary: Failing to connect to Kafka due to listener configuration issue. Getting error `Topic testing not present in metadata after 60000 ms`
                 Key: KAFKA-8849
                 URL: https://issues.apache.org/jira/browse/KAFKA-8849
             Project: Kafka
          Issue Type: Bug
          Components: network
    Affects Versions: 2.1.1
         Environment: ubuntu
            Reporter: Praveen


Failing to connect to Kafka due to listener configuration issues. Getting error `Topic testing not present in metadata after 60000 ms`. [on hold]

I started facing this problem in my test automation script, which creates the setup automatically on Docker. I was able to narrow the problem down to the advertised.listeners and listeners configuration.

When the setup is created manually with a basic config, it works fine.

If the config is changed to some other value that fails, the broker does not start even after reverting back to the config that worked. In this case the Docker container has to be deleted and recreated with a fresh Kafka installation.

Creating the setup through automation is still a blocker, because even creating it with the proper configuration (the one that works when created manually) is not working at this point.

Note: I don't really need to configure `advertised.listeners`, because Kafka is being accessed from the same subnet.
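
For reference, a minimal listener setup for this same-subnet case might look like the sketch below (placeholder values, not copied from my environment):

```
# Sketch of a same-subnet setup (hypothetical values).
# Leaving the host empty binds the listener on all interfaces:
listeners=PLAINTEXT://:9093
# advertised.listeners can then stay unset; the broker falls back to the
# listeners value or to java.net.InetAddress.getCanonicalHostName(), i.e.
# the container hostname, which is resolvable inside the Docker network.
#advertised.listeners=PLAINTEXT://<container-hostname>:9093
```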

Kafka version: 2.12-2.0.0

This is the configuration that worked:

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=146

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://d5019cfc7df0:9093

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:SSL,INTERNAL:SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs_46

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=afda0da8220c:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
home=/opt/kafka
port=9093
#listeners=PLAINTEXT://:9093
ssl.key.password=test1234
ssl.keystore.type=JKS
ssl.truststore.password=test1234
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.password=test1234
version=2.12-2.1.1
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.endpoint.identification.algorithm=
ssl.truststore.type=JKS
ssl.client.auth=none
```


This is the one that failed:

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1000

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:29092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:SSL,INTERNAL:SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=79652d480025:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
home=/opt/kafka
port=29092
ssl.key.password=test1234
ssl.keystore.type=JKS
ssl.truststore.password=test1234
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.password=test1234
version=2.12-2.1.1
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.endpoint.identification.algorithm=
ssl.truststore.type=JKS
ssl.client.auth=none
```

This is the error from the producer:

```
Failed to send message from producer
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for newtopic-0:314130 ms has passed since batch creation
[Ljava.lang.StackTraceElement;@4e50445
```
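
For context, the test producer connects with client settings roughly like the sketch below (hypothetical; the exact values in the test script may differ):

```
# Hypothetical client config for the test producer (sketch):
bootstrap.servers=d5019cfc7df0:9093
# security.protocol must match the protocol of the listener being contacted;
# an SSL client talking to a PLAINTEXT listener produces the broker-side
# InvalidReceiveException shown below.
security.protocol=PLAINTEXT
```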


And this is from the broker log:

```
[2019-08-29 07:22:36,584] WARN [SocketServer brokerId=111] Unexpected error from /172.22.0.3; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369296128 larger than 104857600)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
	at kafka.network.Processor.poll(SocketServer.scala:678)
	at kafka.network.Processor.run(SocketServer.scala:583)
	at java.lang.Thread.run(Thread.java:748)
```
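
One observation that may help triage: 369296128 is 0x16030300, which matches the first bytes of a TLS handshake record, so this error typically indicates an SSL-configured client connecting to a PLAINTEXT listener rather than a genuinely oversized request.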



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
