Hey, we are trying to get the JDBC sink connector running. I'm trying to pipe one Kafka topic into Redshift, but I keep getting the following error:

=========================
ERROR Task test-redshift-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:449)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: kafka_test
=========================
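From what I understand, the JDBC sink derives the target table's columns from the fields of the record's value schema, so I assume the Avro schema registered for the topic has to be a record with named fields, roughly like this sketch (the record and field names here are made up, not my real schema):

=========================
{
  "type": "record",
  "name": "CdcSystemValue",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "updated_at", "type": "string"}
  ]
}
=========================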
Does anybody have an idea? Thanks a lot!

These are my properties files:

1. worker (connect-standalone) properties file:
=========================
bootstrap.servers=kafka10test:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://kafka10test:8081
rest.port=8090

# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
=========================

2. sink-connector properties file:
=========================
name=test-redshift-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1

# The topics to consume from - required for sink connectors like this one
topics=cdc_system

# Configuration specific to the JDBC sink connector.
# Connect to the Redshift cluster; tables are not auto-created.
connection.url=jdbc:postgresql://HOST
connection.user=USER
connection.password=PASS

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://kafka10test:8081
enhanced.avro.schema.support=true

# Create the table if it does not exist.
auto.create=false
auto.evolve=false

# Insert or upsert the data (only when possible)
insert.mode=insert

# Table name to insert data into
table.name.format=kafka_test

# Batch size for writes
batch.size=100

fields.whitelist=""

# Primary key mode
pk.mode=none
=========================
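In case it helps with debugging, this is how I've been checking which value schema is actually registered for the topic (assuming the default <topic>-value subject naming of the Schema Registry):

=========================
curl http://kafka10test:8081/subjects/cdc_system-value/versions/latest
=========================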