Re: Flink 1.20 and Flink Kafka Connector 3.2.0-1.19

2024-10-03 Thread Dominik.Buenzli
From: patricia lee Date: Thursday, 3 October 2024 at 15:27 To: user@flink.apache.org Subject: Flink 1.20 and Flink Kafka Connector 3.2.0-1.19 Hi, I have upgraded our project to Flink 1.20 and JDK 17. But I noticed ther

Flink 1.20 and Flink Kafka Connector 3.2.0-1.19

2024-10-03 Thread patricia lee
Hi, I have upgraded our project to Flink 1.20 and JDK 17. But I noticed there is no Kafka connector for Flink 1.20. I currently use the existing versions, but there is an intermittent Kafka-related NoClassDefFoundError. Where can I get the Kafka connector for Flink 1.20? Thanks
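Since the Kafka connector was externalized from the Flink core, its artifacts carry a combined version of the form `<connector-version>-<flink-version>` (e.g. the `3.2.0-1.19` in this thread's subject), so a connector built for 1.19 may not load cleanly on a 1.20 runtime. The sketch below illustrates that naming convention only; the version pairs are taken from this thread and are not an authoritative compatibility table.

```python
# Sketch: checking a connector artifact version against a Flink runtime version.
# The "<connector>-<flink>" suffix convention (e.g. "3.2.0-1.19") follows the
# versions discussed in this thread; treat the example pairs as illustrative.

def connector_matches_flink(artifact_version: str, flink_version: str) -> bool:
    """Return True if the connector's Flink suffix matches the runtime's
    major.minor version (e.g. "3.2.0-1.19" matches Flink "1.19.1")."""
    connector_part, _, flink_suffix = artifact_version.rpartition("-")
    if not connector_part:
        return False
    major_minor = ".".join(flink_version.split(".")[:2])
    return flink_suffix == major_minor

print(connector_matches_flink("3.2.0-1.19", "1.19.1"))  # True
print(connector_matches_flink("3.2.0-1.19", "1.20.0"))  # False
```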

Re: Re: Re: Flink kafka connector for v 1.19.0

2024-05-17 Thread Niklas Wilcke
> Best Regards >>> Ahmed Hamdy >>> >>> >>> On Fri, 10 May 2024 at 18:14, Aniket Sule >> <mailto:aniket.s...@netwitness.com>> wrote: >>>> Hello, >>>> >>>> On the Flink downloads page, the latest stable

Re: [EXTERNAL]Re: Flink kafka connector for v 1.19.0

2024-05-16 Thread Hang Ruan
et Sule > wrote: > >> Hello, >> >> On the Flink downloads page, the latest stable version is Flink 1.19.0. >> However, the Flink Kafka connector is v 3.1.0, that is compatible with >> 1.18.x. >> >> Is there a timeline when the Kafka connector fo

Re: [EXTERNAL]Re: Flink kafka connector for v 1.19.0

2024-05-16 Thread Niklas Wilcke
e: >> Hello, >> >> On the Flink downloads page, the latest stable version is Flink 1.19.0. >> However, the Flink Kafka connector is v 3.1.0, that is compatible with >> 1.18.x. >> >> Is there a timeline when the Kafka connector for v 1.19 will be re

Re: Flink kafka connector for v 1.19.0

2024-05-10 Thread Ahmed Hamdy
at 18:14, Aniket Sule wrote: > Hello, > > On the Flink downloads page, the latest stable version is Flink 1.19.0. > However, the Flink Kafka connector is v 3.1.0, that is compatible with > 1.18.x. > > Is there a timeline when the Kafka connector for v 1.19 will be released? &g

Flink kafka connector for v 1.19.0

2024-05-10 Thread Aniket Sule
Hello, On the Flink downloads page, the latest stable version is Flink 1.19.0. However, the Flink Kafka connector is v 3.1.0, that is compatible with 1.18.x. Is there a timeline when the Kafka connector for v 1.19 will be released? Is it possible to use the v3.1.0 connector with Flink v 1.19

Re: Utilizing Kafka headers in Flink Kafka connector

2022-10-13 Thread Shengkai Fang
Hi. You can use the SQL API to parse or write the header in the Kafka record[1] if you are using Flink SQL. Best, Shengkai [1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#available-metadata Yaroslav Tkachenko wrote on Thu, Oct 13, 2022 at 02:21: > Hi, > > You can implemen
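The documentation Shengkai links describes a `headers` metadata column for the Kafka SQL connector. A minimal DDL sketch of that approach follows; the table, topic, and server names are placeholders, not from the thread.

```sql
-- Sketch of a Kafka table exposing record headers as metadata
-- (table, topic, and server names here are placeholders).
CREATE TABLE events (
  `user_id` STRING,
  `payload` STRING,
  `headers` MAP<STRING, BYTES> METADATA  -- Kafka record headers, per the linked docs
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);
```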

Re: Utilizing Kafka headers in Flink Kafka connector

2022-10-12 Thread Yaroslav Tkachenko
Hi, You can implement a custom KafkaRecordDeserializationSchema (example https://docs.immerok.cloud/docs/cookbook/reading-apache-kafka-headers-with-apache-flink/#the-custom-deserializer) and just avoid emitting the record if the header value matches what you need. On Wed, Oct 12, 2022 at 11:04 AM
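The filtering idea Yaroslav describes — drop a record unless a header matches — can be sketched independently of Flink. In real code this logic would live inside a custom `KafkaRecordDeserializationSchema` (Java/Scala), as in the linked cookbook; the record and header shapes below are simplified stand-ins.

```python
# Sketch of the filtering idea behind a custom deserializer: emit a record only
# when a given header matches the expected value. Real Flink code would put this
# logic inside KafkaRecordDeserializationSchema.deserialize(record, collector);
# the (value, headers) tuples below are simplified stand-ins for Kafka records.

def filter_by_header(records, header_key, expected):
    """Yield record values whose headers contain header_key == expected."""
    for value, headers in records:
        if headers.get(header_key) == expected:
            yield value

records = [
    ("event-1", {"source": b"app-a"}),
    ("event-2", {"source": b"app-b"}),
    ("event-3", {}),  # no headers at all: skipped
]
print(list(filter_by_header(records, "source", b"app-a")))  # ['event-1']
```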

Utilizing Kafka headers in Flink Kafka connector

2022-10-12 Thread Great Info
I have some flink applications that read streams from Kafka, now the producer side code has introduced some additional information in Kafka headers while producing records. Now I need to change my consumer-side logic to process the records if the header contains a specific value, if the header valu

Re: (Possible) bug in flink-kafka-connector (metrics rewriting)

2022-08-02 Thread zhanghao.chen
essing. issues.apache.org  Best, Zhanghao Chen From: Valentina Predtechenskaya Sent: Wednesday, August 3, 2022 1:32 To: user@flink.apache.org Subject: (Possible) bug in flink-kafka-connector (metrics rewriting) Hello ! I would like to report a bug with m

(Possible) bug in flink-kafka-connector (metrics rewriting)

2022-08-02 Thread Valentina Predtechenskaya
/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaWriter.java#L359-L360 I have debugged these libraries a lot and I'm sure in that behavior. If, for example, patch flink-kafka-connector with condition not to initialize metr

Re: Apache Flink Kafka Connector not found Error

2021-07-19 Thread Taimoor Bhatti
2021 5:23 PM To: Taimoor Bhatti ; user@flink.apache.org Subject: Re: Apache Flink Kafka Connector not found Error Hi Taimoor, It seems sometimes IntelliJ does not work well with indexing; perhaps you could choose mvn -> reimport project from the context menu. If it still does not work, perhaps you mi

Re: Apache Flink Kafka Connector not found Error

2021-07-19 Thread Yun Gao
Yun -- From:Taimoor Bhatti Send Time:2021 Jul. 19 (Mon.) 23:03 To:user@flink.apache.org ; Yun Gao Subject:Re: Apache Flink Kafka Connector not found Error Hello Yun, Many thanks for the reply... For some reason I'm not able t

Re: Apache Flink Kafka Connector not found Error

2021-07-19 Thread Taimoor Bhatti
mport org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer "mvn clean compile" works however... (Thanks...) Do you know why IntelliJ doesn't see the import?? Best, Taimoor From: Yun Gao Sent: Monday, July 19, 2021 3:25 PM To: Taimoor Bhatti ; user@flink.apache.org Subject: Re: Apache F

Re: Apache Flink Kafka Connector not found Error

2021-07-19 Thread Yun Gao
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer Best, Yun -- From:Taimoor Bhatti Send Time:2021 Jul. 19 (Mon.) 18:43 To:user@flink.apache.org Subject:Apache Flink Kafka Connector not found Error I'm having some trouble with

Apache Flink Kafka Connector not found Error

2021-07-19 Thread Taimoor Bhatti
I'm having some trouble with using the Flink DataStream API with the Kafka Connector. There don't seem to be great resources on the internet which can explain the issue I'm having. My project is here: https://github.com/sysarcher/flink-scala-tests. I'm unable to use FlinkKafkaConsumer

Re: Flink Kafka connector in Python

2020-07-14 Thread Xingbo Huang
k.table.api.ValidationException: Could not find >>>> the required schema in property 'schema'. >>>>at >>>> org.apache.flink.table.descriptors.SchemaValidator.validate(SchemaValidator.java:90) >>>>at >>>> org.apache.fli

Re: Flink Kafka connector in Python

2020-07-06 Thread Manas Kale
bleSourceFactory.java:49) >>> at >>> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:53) >>> ... 13 more >>> >>> >>> During handling of the above exception, another exception occurred: >>&g

Re: Flink Kafka connector in Python

2020-07-03 Thread Manas Kale
link/table/descriptors.py", >> line 1295, in register_table_source >> self._j_connect_table_descriptor.registerTableSource(name) >> File >> "/home/manas/anaconda3/envs/flink_env/lib/python3.7/site-packages/py4j/java_gateway.py", >> line 1286, in __c

Re: Flink Kafka connector in Python

2020-07-02 Thread Xingbo Huang
what's > wrong with the schema. Is the schema not properly declared? Is some field > missing? > > FWIW I have included the JSON and kafka connector JARs in the required > location. > > > Regards, > Manas > > > On Tue, Jun 30, 2020 at 11:58 AM Manas Kale

Re: Flink Kafka connector in Python

2020-07-02 Thread Manas Kale
ead kafka >> data. Besides, You can refer to the common questions about PyFlink[3] >> >> [1] >> https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/create.html#run-a-create-statement >> [2] >> https://ci.apache.org/projects/flink/flink-docs-rele

Re: Flink Kafka connector in Python

2020-06-29 Thread Manas Kale
t; I want to consume and write to Kafak from Flink's python API. >> >> The only way I found to do this was through this >> <https://stackoverflow.com/questions/52744277/apache-flink-kafka-connector-in-python-streaming-api-cannot-load-user-class> >> question &g

Re: Flink Kafka connector in Python

2020-06-29 Thread Xingbo Huang
.html Best, Xingbo Manas Kale wrote on Mon, Jun 29, 2020 at 8:10 PM: > Hi, > I want to consume and write to Kafka from Flink's python API. > > The only way I found to do this was through this > <https://stackoverflow.com/questions/52744277/apache-flink-kafka-connector-in-python-streamin

Flink Kafka connector in Python

2020-06-29 Thread Manas Kale
Hi, I want to consume and write to Kafka from Flink's python API. The only way I found to do this was through this <https://stackoverflow.com/questions/52744277/apache-flink-kafka-connector-in-python-streaming-api-cannot-load-user-class> question on SO where the user essentially copies
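In newer PyFlink versions a Kafka source is typically declared with SQL DDL passed to `TableEnvironment.execute_sql(...)` rather than the older descriptor API from the Stack Overflow answer. The sketch below only builds the DDL string so it stays self-contained; the topic and server names are placeholders.

```python
# Sketch: composing a Kafka-source DDL string for PyFlink. Executing it
# requires a PyFlink TableEnvironment (see the commented line at the end);
# topic/server names are placeholders, not values from the thread.

def kafka_source_ddl(table, topic, servers, fmt="json"):
    return (
        f"CREATE TABLE {table} (\n"
        f"  `msg` STRING\n"
        f") WITH (\n"
        f"  'connector' = 'kafka',\n"
        f"  'topic' = '{topic}',\n"
        f"  'properties.bootstrap.servers' = '{servers}',\n"
        f"  'format' = '{fmt}'\n"
        f")"
    )

ddl = kafka_source_ddl("source_topic", "events", "localhost:9092")
print("'connector' = 'kafka'" in ddl)  # True
# With PyFlink installed, register the table via:
#   t_env.execute_sql(ddl)
```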

Re: Flink Kafka Connector Source Parallelism

2020-05-29 Thread Robert Metzger
ld have more parallelism than kafka partitions, > but the extra subtasks will not be doing anything. > > > > *From: *"Chen, Mason" > *Date: *Wednesday, May 27, 2020 at 11:09 PM > *To: *"user@flink.apache.org" > *Subject: *Flink Kafka Connector Source Pa

Re: Flink Kafka Connector Source Parallelism

2020-05-27 Thread Chen, Mason
n" Date: Wednesday, May 27, 2020 at 11:09 PM To: "user@flink.apache.org" Subject: Flink Kafka Connector Source Parallelism Hi all, I’m currently trying to understand Flink’s Kafka Connector and how parallelism affects it. So, I am running the flink playground click count job and the

Flink Kafka Connector Source Parallelism

2020-05-27 Thread Chen, Mason
Hi all, I’m currently trying to understand Flink’s Kafka Connector and how parallelism affects it. So, I am running the flink playground click count job and the parallelism is set to 2 by default. However, I don’t see the 2nd subtask of the Kafka Connector sending any records: https://imgur.c
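The idle second subtask has a simple explanation: Kafka partitions are distributed across the source's subtasks, so with fewer partitions than parallelism some subtasks receive nothing. The modulo assignment below mirrors the general idea, not Flink's exact internal assignment formula.

```python
# Sketch: why one Kafka-source subtask can sit idle. Partitions are spread
# across subtasks; with fewer partitions than parallelism, some subtasks get
# no partitions and therefore emit no records. Illustrative assignment only.

def assign_partitions(num_partitions, parallelism):
    assignment = {subtask: [] for subtask in range(parallelism)}
    for partition in range(num_partitions):
        assignment[partition % parallelism].append(partition)
    return assignment

# One partition, parallelism 2 (the playground setup in this thread):
print(assign_partitions(1, 2))  # {0: [0], 1: []} -> subtask 1 emits nothing
```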

Re:Re: Error handler strategy in Flink Kafka connector with json format

2020-03-08 Thread sunfulin
yep. Glad to see the progress. Best At 2020-03-09 12:44:05, "Jingsong Li" wrote: Hi Sunfulin, I think this is very important too. There is an issue to fix this[1]. Does that meet your requirement? [1] https://issues.apache.org/jira/browse/FLINK-15396 Best, Jingsong Lee On Mon, Mar

Re: Error handler strategy in Flink Kafka connector with json format

2020-03-08 Thread Jingsong Li
Hi Sunfulin, I think this is very important too. There is an issue to fix this[1]. Does that meet your requirement? [1] https://issues.apache.org/jira/browse/FLINK-15396 Best, Jingsong Lee On Mon, Mar 9, 2020 at 12:33 PM sunfulin wrote: > hi , community, > I am wondering if there is some config

Error handler strategy in Flink Kafka connector with json format

2020-03-08 Thread sunfulin
hi, community, I am wondering if there are some config params for an error-handler strategy, as [1] refers to, when defining a Kafka stream table using Flink SQL DDL. For example, the following `json.parser.failure.strategy` can be set to `silently skip`, which can skip the malformed dirty data proces
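The "silently skip" behavior asked for here is what FLINK-15396 (linked in the reply) tracks, later surfaced in Flink's JSON format as the `json.ignore-parse-errors` option. A stand-alone sketch of the strategy itself:

```python
# Sketch of a "silently skip" error-handling strategy for malformed JSON rows,
# the behavior this thread asks for. Flink later exposes this via the JSON
# format's 'json.ignore-parse-errors' option; this is just the core idea.

import json

def parse_skipping_errors(lines):
    """Parse JSON lines, silently dropping rows that fail to parse."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # skip dirty data instead of failing the job

rows = ['{"a": 1}', 'not-json', '{"a": 2}']
print(list(parse_skipping_errors(rows)))  # [{'a': 1}, {'a': 2}]
```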

Re: Flink Kafka connector consume from a single kafka partition

2020-02-21 Thread Robert Metzger
Hey Hemant, Are you able to reconstruct the ordering of the event, for example based on time or some sequence number? If so, you could create as many Kafka partitions as you need (for proper load distribution), disregarding any ordering at that point. Then you keyBy your stream in Flink, and order
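The approach Robert sketches — spread records over many partitions for load distribution, then restore per-device order downstream by keying on the device id and ordering by a sequence number within each key — can be illustrated in miniature (field names `device_id` and `seq` are illustrative, not from the thread):

```python
# Sketch of keyed reordering: group records by device_id (what keyBy does),
# then restore order within each device using a sequence number. In Flink this
# ordering step would run in a keyed operator; this is the core idea only.

from collections import defaultdict

def reorder_per_key(records):
    """Group by device_id, then order each device's records by seq."""
    by_key = defaultdict(list)
    for rec in records:
        by_key[rec["device_id"]].append(rec)
    return {k: sorted(v, key=lambda r: r["seq"]) for k, v in by_key.items()}

records = [
    {"device_id": "d1", "seq": 2}, {"device_id": "d2", "seq": 1},
    {"device_id": "d1", "seq": 1},
]
out = reorder_per_key(records)
print([r["seq"] for r in out["d1"]])  # [1, 2] -> per-device order restored
```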

Re: Flink Kafka connector consume from a single kafka partition

2020-02-19 Thread hemant singh
Hi Arvid, Thanks for your response. I think I did not word my question properly. I wanted to confirm that if the data is distributed to more than one partition then the ordering cannot be maintained (which is documented). According to your response I understand if I set the parallelism to number o

Re: Flink Kafka connector consume from a single kafka partition

2020-02-19 Thread Arvid Heise
Hi Hemant, Flink passes your configurations to the Kafka consumer, so you could check if you can subscribe to only one partition there. However, I would discourage that approach. I don't see the benefit to just subscribing to the topic entirely and have dedicated processing for the different devi

Flink Kafka connector consume from a single kafka partition

2020-02-18 Thread hemant singh
Hello Flink Users, I have a use case where I am processing metrics from different type of sources(one source will have multiple devices) and for aggregations as well as build alerts order of messages is important. To maintain customer data segregation I plan to have single topic for each customer

Flink Kafka Connector

2019-09-05 Thread Vishwas Siravara
Hi guys, I am using the Flink connector for Kafka from 1.9.0. Here is my sbt dependency: "org.apache.flink" %% "flink-connector-kafka" % "1.9.0". When I check the log file I see that the kafka version is 0.10.2.0. According to the docs it says that 1.9.0 onwards the version should be 2.2.0. Why do I

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-06 Thread xwang355
Thanks Fabian. This is really helpful.

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-06 Thread Fabian Hueske
Hi Ben, Flink correctly maintains the offsets of all partitions that are read by a Kafka consumer. A checkpoint is only complete when all functions successfully checkpoint their state. For a Kafka consumer, this state is the current reading offset. In case of a failure the offsets and the state of a

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-05 Thread xwang355
Elias Thanks for your reply. In this case, *When # of Kafka consumers = # of partitions, and I use setParallelism(>1), something like this 'messageSteam.rebalance().map(lamba).setParallelism(3).print()' * If checkpointing is enabled, I assume Flink will commit the offsets in the 'right order' d

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-05 Thread Elias Levy
There is no such concept in Flink. Flink tracks offsets in its checkpoints. It can optionally commit offsets to Kafka, but that is only for reporting purposes. If you wish to lower the number of records that get reprocessed in the case of a restart, then you must lower the checkpoint interval.
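Elias's point implies a simple bound: on restart Flink rewinds to the offsets stored in the last completed checkpoint, so the records replayed per partition are at most those that arrived since that checkpoint. A rough back-of-the-envelope sketch, with invented illustrative numbers:

```python
# Sketch of the reprocessing bound behind Elias's advice: records replayed per
# partition after a restart are bounded by arrivals since the last checkpoint,
# so lowering the checkpoint interval lowers the worst case. Numbers invented.

def max_reprocessed(records_per_sec, checkpoint_interval_sec):
    """Worst-case records replayed per partition after a restart."""
    return records_per_sec * checkpoint_interval_sec

# 1000 records/s with a 60 s checkpoint interval:
print(max_reprocessed(1000, 60))  # 60000
# Halving the interval halves the worst case:
print(max_reprocessed(1000, 30))  # 30000
```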

Does Flink Kafka connector has max_pending_offsets concept?

2019-06-04 Thread wang xuchen
Hi Flink users, When # of Kafka consumers = # of partitions, and I use setParallelism(>1), something like this 'messageSteam.rebalance().map(lamba).setParallelism(3).print()' How do I tune the # of outstanding uncommitted offsets? Something similar to https://storm.apache.org/releases/1.1.2/storm-k

Re: Apache Flink: Kafka connector in Python streaming API, “Cannot load user class”

2018-10-11 Thread Dawid Wysakowicz
Hi Kostas, As far as I know you cannot just use Java classes from within the Python API. I think the Python API does not provide a wrapper for the Kafka connector. I am adding Chesnay to cc to correct me if I am wrong. Best, Dawid On 11/10/18 12:18, Kostas Evangelou wrote: > Hey all, > > Thank you so much

Apache Flink: Kafka connector in Python streaming API, “Cannot load user class”

2018-10-11 Thread Kostas Evangelou
Hey all, Thank you so much for your efforts. I've already posted this question on stack overflow, but thought I should ask here as well. I am trying out Flink's new Python streaming API and attempting to run my script with ./flink-1.6.1/bin/pyflink-stream.sh examples/read_from_kafka.py. The pytho

Re: Flink Kafka connector not exist

2018-04-19 Thread Tzu-Li (Gordon) Tai
/ReadFromKafka.java:[24,51] package org.apache.flink.streaming.connectors.kafka does not exist [ERROR] /env/mvn/test-0.0.1/processing-app/src/main/java/com/test/alpha/ReadFromKafka.java:[52,63] cannot find symbol [ERROR]   symbol:   class FlinkKafkaConsumer011 Here is my "pom.xml" configuration for F

Flink Kafka connector not exist

2018-04-19 Thread Lehuede sebastien
] cannot find symbol* *[ERROR] symbol: class FlinkKafkaConsumer011* Here is my "pom.xml" configuration for Flink Kafka connector : *1.4.2 1.8 2.11 ${java.version} ${java.version}* ** *org.apache.flink* * flink-connector-k

Re: Flink - Kafka Connector

2018-04-13 Thread Alexandru Gutan
You will be able to use it. Kafka 1.1.0 has backwards compatibility with v1.0, 0.11 and 0.10 connectors as far as I know. On 13 April 2018 at 15:12, Lehuede sebastien wrote: > Hi All, > > I'm very new in Flink (And on Streaming Application topic in general) so > sorry for my newbie question. >

Flink - Kafka Connector

2018-04-13 Thread Lehuede sebastien
Hi All, I'm very new to Flink (and to streaming applications in general), so sorry for my newbie question. I plan to do some tests with Kafka and Flink and use the Kafka connector for that. I found information on this page: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/co

Re: Flink kafka connector with JAAS configurations crashed

2018-03-19 Thread Nico Kruber
Hi, I'm no expert on Kafka here, but as the tasks are run on the worker nodes (where the TaskManagers are run), please double-check whether the file under /data/apps/spark/kafka_client_jaas.conf on these nodes also contains the same configuration as on the node running the JobManager, i.e. an appro

Flink kafka connector with JAAS configurations crashed

2018-03-13 Thread sundy
Hi all, I use the code below to set the Kafka JAAS config. The serverConfig.jasspath is /data/apps/spark/kafka_client_jaas.conf, but on a Flink standalone deployment it crashes. I am sure the kafka_client_jaas.conf is valid, because other applications (Spark Streaming) are still working fine with

Re: Flink-Kafka connector - partition offsets for a given timestamp?

2017-12-12 Thread Yang, Connie
Thanks, Gordon! Will keep an eye on that! Connie From: "Tzu-Li (Gordon) Tai" Date: Monday, December 11, 2017 at 5:29 PM To: Connie Yang Cc: "user@flink.apache.org" Subject: Re: Flink-Kafka connector - partition offsets for a given timestamp? Hi Connie, We do have a

Re: Flink-Kafka connector - partition offsets for a given timestamp?

2017-12-11 Thread Tzu-Li (Gordon) Tai
this happens for Flink 1.5, for which the proposed release date is around February 2018. Cheers, Gordon On Tue, Dec 12, 2017 at 3:53 AM, Yang, Connie wrote: > Hi, > > > > Does the Flink-Kafka connector allow the job graph to consume topics/partitions > from a specific timestamp? >

Flink-Kafka connector - partition offsets for a given timestamp?

2017-12-11 Thread Yang, Connie
Hi, Does the Flink-Kafka connector allow the job graph to consume topics/partitions from a specific timestamp? https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java#L469 seems to
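The feature asked about here — starting consumption from a timestamp — boils down to an offsets-for-timestamp lookup per partition: find the first offset whose record timestamp is at or after the target. A self-contained sketch of that lookup, with invented log data:

```python
# Sketch of an offsets-for-timestamp lookup, the capability this thread asks
# about (later exposed in the Flink Kafka consumer as start-from-timestamp).
# Given a partition's (offset, timestamp) pairs sorted by timestamp, return
# the first offset at or after the target. Log contents here are invented.

import bisect

def offset_for_timestamp(entries, target_ts):
    """entries: list of (offset, timestamp) sorted by timestamp."""
    timestamps = [ts for _, ts in entries]
    i = bisect.bisect_left(timestamps, target_ts)
    return entries[i][0] if i < len(entries) else None  # None: past log end

log = [(0, 100), (1, 150), (2, 200), (3, 260)]
print(offset_for_timestamp(log, 160))  # 2 (first record with timestamp >= 160)
print(offset_for_timestamp(log, 300))  # None
```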

Re: Issue Faced: Not able to get the consumer offsets from Kafka when using Flink with Flink-Kafka Connector

2017-02-07 Thread MAHESH KUMAR
-1.2.0 >> Using the latest Kafka Connector - FlinkKafkaConsumer010 - >> flink-connector-kafka-0.10_2.11 >> >> Issue Faced: Not able to get the consumer offsets from Kafka when using >> Flink with Flink-Kafka Connector >> >> $ /work/kafka_2.11-0.10.1.1/bi

Re: Issue Faced: Not able to get the consumer offsets from Kafka when using Flink with Flink-Kafka Connector

2017-02-07 Thread Robert Metzger
offsets from Kafka when using > Flink with Flink-Kafka Connector > > $ /work/kafka_2.11-0.10.1.1/bin/kafka-consumer-groups.sh > --bootstrap-server localhost:9092 --list > console-consumer-19886 > console-consumer-89637 > $ > > It does not show the consumer group &quo

Issue Faced: Not able to get the consumer offsets from Kafka when using Flink with Flink-Kafka Connector

2017-02-07 Thread MAHESH KUMAR
with Flink-Kafka Connector $ /work/kafka_2.11-0.10.1.1/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list console-consumer-19886 console-consumer-89637 $ It does not show the consumer group "test" For a group that does not exist, the message is as follows: $ /work/

Re: Zeppelin: Flink Kafka Connector

2017-01-18 Thread Fabian Hueske
'm using Zeppelin built off master. And Flink 1.2 > > built off the release-1.2 branch. Is that the right branch? > > > > Neil > > > > > > > > -- > > View this message in context: http://apache-flink-user- > mailing-list-archive.2336050.n4.nabble

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
Hi Timo & Fabian, Thanks for replying. I'm using Zeppelin built off master. And Flink 1.2 built off the release-1.2 branch. Is that the right branch? Neil

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
fka > specifically? > > > > *From: *Fabian Hueske > *Reply-To: *"user@flink.apache.org" > *Date: *Tuesday, January 17, 2017 at 7:10 AM > *To: *"user@flink.apache.org" > *Subject: *Re: Zeppelin: Flink Kafka Connector > > > >

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Foster, Craig
Are connectors being included in the 1.2.0 release or do you mean Kafka specifically? From: Fabian Hueske Reply-To: "user@flink.apache.org" Date: Tuesday, January 17, 2017 at 7:10 AM To: "user@flink.apache.org" Subject: Re: Zeppelin: Flink Kafka Connector One thing to

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
One thing to add: Flink 1.2.0 has not been released yet. The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the first release candidate (RC0). Best, Fabian 2017-01-17 16:08 GMT+01:00 Timo Walther : > You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 > was n

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Timo Walther
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 was not present at that time. You need to upgrade to Flink 1.2. Timo On 17/01/17 at 15:58, Neil Derraugh wrote: This is really a Zeppelin question, and I’ve already posted to the user list there. I’m just trying to dra

Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
This is really a Zeppelin question, and I’ve already posted to the user list there. I’m just trying to draw in as many relevant eyeballs as possible. If you can help please reply on the Zeppelin mailing list. In my Zeppelin notebook I’m having a problem importing the Kafka streaming library f

Re: Flink Kafka Connector behaviour on leader change and broker upgrade

2016-11-06 Thread Robert Metzger
Hi, yes, the Flink Kafka connector for Kafka 0.8 handles broker leader changes without failing. The SimpleConsumer provided by Kafka 0.8 doesn't handle that. The 0.9 Flink Kafka consumer also supports broker leader changes transparently. If you keep using the Flink Kafka 0.8 connector with

Flink Kafka Connector behaviour on leader change and broker upgrade

2016-11-04 Thread Janardhan Reddy
Hi, Does the Flink Kafka connector 0.8.2 handle a broker's leader change gracefully, since the simple Kafka consumer should be handling leader changes for a partition? How would the consumer behave when upgrading the brokers from 0.8 to 0.9. Thanks

Re: flink-kafka-connector offset management

2016-05-20 Thread Ufuk Celebi
Hey Arun! How did you configure your Kafka source? If the offset has been committed and you configured the source to read from the latest offset, the message should not be re-processed. – Ufuk On Fri, May 13, 2016 at 2:19 PM, Arun Balan wrote: > Hi, I am trying to use the flink-ka

flink-kafka-connector offset management

2016-05-13 Thread Arun Balan
Hi, I am trying to use the flink-kafka-connector and I notice that every time I restart my application it re-reads the last message on the kafka topic. So if the latest offset on the topic is 10, then when the application is restarted, kafka-connector will re-read message 10. Why is this the
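The re-read symptom described here matches a common off-by-one in committed-offset semantics: in Kafka, a committed offset conventionally means "next record to read", so committing the offset of the last processed record (10) instead of 10 + 1 makes a restarted consumer see record 10 again. A sketch of the two conventions (illustrative only, not a claim about what the connector actually committed):

```python
# Sketch of the off-by-one behind this re-read symptom. Kafka's convention is
# that a committed offset is the NEXT record to read; committing the offset of
# the last processed record instead causes one record to be replayed on restart.

def resume_position(last_processed_offset, commit_next=True):
    """Offset a restarted consumer starts from, under two commit conventions."""
    committed = last_processed_offset + 1 if commit_next else last_processed_offset
    return committed  # the consumer resumes reading at the committed offset

print(resume_position(10, commit_next=True))   # 11 -> no re-read
print(resume_position(10, commit_next=False))  # 10 -> message 10 read again
```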