Hey everyone,

I am having some issues running Kafka Connect inside Docker, so I would
really appreciate some feedback on what I'm doing wrong.

I'm able to run it locally (by executing `connect-distributed
config.properties`). However, when running in Docker and passing the same
configuration as environment variables, the container just exits without any
error after Kafka Connect has started and begun assigning tasks to workers.

Local setup:
 - I have Kafka installed locally, so I'm using the `connect-distributed`
script along with my properties file, which is listed below. This seems to
work without issues.

# Kafka broker IP addresses to connect to
bootstrap.servers=kafka:9092
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.location=truststore.jks
ssl.truststore.password=some-password
ssl.protocol=TLS
security.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="kafka_username" \
    password="kafka_password";
consumer.ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=some-password
consumer.ssl.protocol=TLS
consumer.security.protocol=SASL_SSL
consumer.ssl.endpoint.identification.algorithm=
consumer.sasl.mechanism=SCRAM-SHA-256
# CONFIGURE USER AND PASSWORD!
consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="kafka_username" \
    password="kafka_password";


group.id=my-group
config.storage.topic=kconnect.config
offset.storage.topic=kconnect.offsets
status.storage.topic=kconnect.status


# Path to directory containing the connect jar and dependencies
plugin.path=path/to/my/plugins/
# Converters to use to convert keys and values
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter

# The internal converters Kafka Connect uses for storing offset and configuration data
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/kconnect.offsets
rest.port=XXXX


Dockerized setup:
 - I'm using the Confluent Kafka Connect Base Image (I've also tried with
the "fat" Confluent image), to which I'm passing my configuration as
environment variables when I run the container.
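
For reference, my understanding of the Confluent image's convention is that it
builds the worker config from every CONNECT_-prefixed environment variable by
stripping the prefix, lowercasing, and turning underscores into dots, so the
variables below should map to the same properties as in my file above (please
correct me if that assumption is wrong), e.g.:

CONNECT_BOOTSTRAP_SERVERS=kafka:9092   ->  bootstrap.servers=kafka:9092
CONNECT_SASL_MECHANISM=SCRAM-SHA-256   ->  sasl.mechanism=SCRAM-SHA-256
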
Dockerfile:

FROM confluentinc/cp-kafka-connect-base

COPY file-sink.txt /output/file-sink.txt

COPY truststore.jks /trustores/
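
I build the image with a plain docker build (kconnect-image is just the tag I
reference in the run command below):

docker build -t kconnect-image .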


Docker run with environment variables:

docker run --rm --name kconnect -p 0.0.0.0:XXXX:XXXX \
    -e CONNECT_BOOTSTRAP_SERVERS=kafka:9092 \
    -e CONNECT_SSL_PROTOCOL=TLS \
    -e CONNECT_ENABLED_PROTOCOLS=TLSv1.2,TLSv1.1,TLSv1 \
    -e CONNECT_SSL_TRUSTSTORE_LOCATION=/truststores/truststore.jks \
    -e CONNECT_SSL_TRUSTSTORE_PASSWORD=instaclustr \
    -e CONNECT_SECURITY_PROTOCOL=SASL_SSL \
    -e CONNECT_SASL_JAAS_CONFIG='org.apache.kafka.common.security.scram.ScramLoginModule required username="KAFKA_USERNAME" password="KAFKA_PASSWORD";' \
    -e CONNECT_SASL_MECHANISM=SCRAM-SHA-256 \
    -e CONNECT_GROUP_ID=my-group \
    -e CONNECT_CONFIG_STORAGE_TOPIC="kconnect.config" \
    -e CONNECT_OFFSET_STORAGE_TOPIC="kconnect.offsets" \
    -e CONNECT_STATUS_STORAGE_TOPIC="kconnect.status" \
    -e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.converters.ByteArrayConverter" \
    -e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.converters.ByteArrayConverter" \
    -e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
    -e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
    -e CONNECT_REST_ADVERTISED_HOST_NAME=localhost \
    -e CONNECT_REST_PORT=XXXX:XXXX \
    -e CONNECT_PLUGIN_PATH='path/to/my/plugins/' \
    kconnect-image
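
Before sending any request I make sure the worker inside the container is
actually up; hitting the REST root on the same XXXX port just returns the
Connect version info, and the container logs can be followed as well:

curl http://localhost:XXXX/
docker logs -f kconnect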


In both cases, after the Kafka Connect server has started, I send the same
request:

curl -X POST http://localhost:XXXX/connectors \
-H "Content-Type: application/json" -H "Accept: application/json" -d @- \
 << EOF
{
"name": "simple-file-sink-connector",
  "config": {
  "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
  "file": "./output/file-sink.txt",
  "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "tasks.max": 3,
  "topics": "my-topic",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}
EOF
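
For reference, the state of the connector and its tasks can also be checked
through the REST API on the same XXXX port:

curl http://localhost:XXXX/connectors/simple-file-sink-connector/status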


I'm using the built-in FileStreamSinkConnector here to eliminate another
moving part. In case I've made a mistake somewhere in this example, I'll note
that the existence of the file-sink.txt file is not the issue.
Without registering any connectors, the two Kafka Connect servers behave
exactly the same. However, when I send the above request to the local
instance, everything works as expected, whereas when I send it to the Kafka
Connect instance in Docker, the container just dies without any logs after the
tasks have been assigned. The exit code of the Docker container, obtained by
running `echo $?`, is 137.
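
For what it's worth, 137 is 128 + 9, i.e. SIGKILL, which as far as I
understand usually means the container was killed from the outside (for
example by the kernel's OOM killer) rather than Connect exiting on its own.
Dropping --rm from the run command above and inspecting the exited container
should show whether Docker flagged an out-of-memory kill:

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' kconnect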

I'm really puzzled by this behaviour. I hope I have provided enough
information above, but if I haven't, please let me know.

Thanks a lot for reading this wall of text and for your help.

All the best!
Aleksandar Irikov
