Hi Vishnu,

I wrote an implementation of org.apache.kafka.connect.storage.Converter, put it on the Kafka Connect worker classpath, and set it via the value.converter worker property to provide the schema that the JDBC sink needs.
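For reference, here's roughly the shape of such a converter. This is a minimal sketch rather than my exact code: the package, class name, schema name, and field names/types are placeholders you'd swap for your own table's columns. It parses plain JSON bytes and attaches a fixed Connect schema so the sink can map fields to columns:

package com.example; // hypothetical package

import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.DataException;
import org.apache.kafka.connect.storage.Converter;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FixedSchemaJsonConverter implements Converter {

    // Hypothetical schema; declare one field per column in your target table.
    private static final Schema VALUE_SCHEMA = SchemaBuilder.struct()
            .name("example.Record")
            .field("id", Schema.INT64_SCHEMA)
            .field("count", Schema.INT64_SCHEMA)
            .build();

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // No configuration needed for this sketch.
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        // Only needed for source connectors; this converter is sink-only.
        throw new UnsupportedOperationException("sink-only converter");
    }

    @Override
    public SchemaAndValue toConnectData(String topic, byte[] value) {
        try {
            // Parse the schemaless JSON, then wrap it in a Struct with the
            // fixed schema so the JDBC sink sees typed, named fields.
            JsonNode node = mapper.readTree(value);
            Struct struct = new Struct(VALUE_SCHEMA)
                    .put("id", node.get("id").asLong())
                    .put("count", node.get("count").asLong());
            return new SchemaAndValue(VALUE_SCHEMA, struct);
        } catch (Exception e) {
            throw new DataException("Failed to parse JSON value", e);
        }
    }
}

With the jar on the worker classpath, you'd then set value.converter=com.example.FixedSchemaJsonConverter (again, a placeholder name) in the worker config, and producers can send plain JSON without embedding a schema in every message.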
That approach may work for 1). For 2), Kafka Connect can use upsert if your DB supports it, based on the primary key you configure (there's a config sketch at the bottom of this mail). But I've found in the past that it's not possible to reference values already in the DB: if key X already had count = 5 in the DB, and the JDBC sink received a record with key X and count = 10, it will overwrite rather than accumulate, so after the update the count in the DB will be 10, not 15.

Kind regards,

Liam Clarke-Hutchinson

On Thu, 7 May 2020, 5:48 pm vishnu murali, <vishnumurali9...@gmail.com> wrote:

> Hey Guys,
>
> I am working on the JDBC Sink Connector to take data from a Kafka topic
> to MySQL.
>
> I have 2 questions.
>
> I am using normal Apache Kafka 2.5, not a Confluent version.
>
> 1) For inserting data, every time we need to add the schema along with
> every record. How can I overcome this? I want to send only the data.
>
> 2) At certain times I need to update an existing record without adding
> it as a new record. How can I achieve this?
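Config sketch for 2), assuming you're using the Confluent JDBC sink connector (it runs fine on plain Apache Kafka if you put its jars on the plugin path); the connector name, topic, and connection details are placeholders:

name=mysql-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=my-topic
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=db_user
connection.password=db_password
# Use upsert instead of plain insert; needs a PK the sink can use.
insert.mode=upsert
# Take the PK from a field of the record value, here a hypothetical "id".
pk.mode=record_value
pk.fields=id

But per the above, the upsert simply replaces the row's values; it won't do count = count + delta for you.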