From: Patricia Lee
Date: Thursday, 3 October 2024 at 15:27
To: user@flink.apache.org
Subject: Flink 1.20 and Flink Kafka Connector 3.2.0-1.19
Hi,
I have upgraded our project to Flink 1.20 and JDK 17, but I noticed there
is no Kafka connector released for Flink 1.20.
I am currently using the 3.2.0-1.19 connector with it, but I am getting an
intermittent Kafka-related NoClassDefFoundError.
Where can I get the Kafka connector for Flink 1.20? Thanks.
> Best Regards
> Ahmed Hamdy
>
> On Fri, 10 May 2024 at 18:14, Aniket Sule <mailto:aniket.s...@netwitness.com> wrote:
>> Hello,
>> On the Flink downloads page, the latest stable version is Flink 1.19.0.
>> However, the Flink Kafka connector is v 3.1.0, that is compatible with
>> 1.18.x.
>> Is there a timeline when the Kafka connector for v 1.19 will be released?
Hello,
On the Flink downloads page, the latest stable version is Flink 1.19.0.
However, the Flink Kafka connector is v 3.1.0, that is compatible with 1.18.x.
Is there a timeline when the Kafka connector for v 1.19 will be released? Is it
possible to use the v3.1.0 connector with Flink v 1.19?
Hi,
You can use the SQL API to parse or write the headers in the Kafka record[1] if
you are using Flink SQL.
Best,
Shengkai
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#available-metadata
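For illustration, a minimal sketch of that approach via the Table API; the table name, topic, format, and the 'trace-id' header key are made-up examples, not anything from the original thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaHeadersExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Expose the Kafka record headers as a metadata column (MAP<STRING, BYTES>),
        // as described in the "available metadata" section linked above.
        tEnv.executeSql(
                "CREATE TABLE events (\n"
                        + "  payload STRING,\n"
                        + "  headers MAP<STRING, BYTES> METADATA\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'events',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'properties.group.id' = 'headers-demo',\n"
                        + "  'scan.startup.mode' = 'earliest-offset',\n"
                        + "  'format' = 'raw'\n"
                        + ")");

        // Read an individual header value (raw bytes) by key.
        tEnv.executeSql("SELECT payload, headers['trace-id'] FROM events").print();
    }
}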
Yaroslav Tkachenko wrote on Thu, Oct 13, 2022 at 02:21:
Hi,
You can implement a custom KafkaRecordDeserializationSchema (example
https://docs.immerok.cloud/docs/cookbook/reading-apache-kafka-headers-with-apache-flink/#the-custom-deserializer)
and just avoid emitting the record if the header value matches what you
need.
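A minimal sketch of that idea against the KafkaRecordDeserializationSchema interface used by the newer KafkaSource; the "skip" header name, its value, and the String output type are assumptions for illustration:

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.util.Collector;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderFilteringDeserializer
        implements KafkaRecordDeserializationSchema<String> {

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<String> out)
            throws IOException {
        Header header = record.headers().lastHeader("skip");
        if (header != null
                && "true".equals(new String(header.value(), StandardCharsets.UTF_8))) {
            // Emitting nothing here effectively filters the record out.
            return;
        }
        out.collect(new String(record.value(), StandardCharsets.UTF_8));
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}

The schema is then passed to KafkaSource.builder().setDeserializer(...).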
On Wed, Oct 12, 2022 at 11:04 AM
I have some flink applications that read streams from Kafka, now
the producer side code has introduced some additional information in Kafka
headers while producing records.
Now I need to change my consumer-side logic to process a record only if the
header contains a specific value, and to skip it otherwise.
Best,
Zhanghao Chen
From: Valentina Predtechenskaya
Sent: Wednesday, August 3, 2022 1:32
To: user@flink.apache.org
Subject: (Possible) bug in flink-kafka-connector (metrics rewriting)
Hello!
I would like to report a bug with metrics rewriting in the flink-kafka-connector.
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaWriter.java#L359-L360
I have debugged these libraries a lot and I am sure about that behavior. If, for
example, one patches flink-kafka-connector with a condition not to initialize the metrics...
From: Yun Gao
Sent: Monday, July 19, 2021 5:23 PM
To: Taimoor Bhatti ; user@flink.apache.org
Subject: Re: Apache Flink Kafka Connector not found Error
Hi Taimoor,
It seems that sometimes IntelliJ does not handle the index well; perhaps
you could choose Maven -> Reimport Project from the context menu.
If it still does not work, perhaps you might...
Yun
--
From: Taimoor Bhatti
Send Time: 2021 Jul. 19 (Mon.) 23:03
To: user@flink.apache.org; Yun Gao
Subject: Re: Apache Flink Kafka Connector not found Error
Hello Yun,
Many thanks for the reply...
For some reason I'm not able to import
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
"mvn clean compile" works however... (Thanks...)
Do you know why IntelliJ doesn't see the import?
Best,
Taimoor
From: Yun Gao
Sent: Monday, July 19, 2021 3:25 PM
To: Taimoor Bhatti ; user@flink.apache.org
Subject: Re: Apache Flink Kafka Connector not found Error
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
Best,
Yun
--
From: Taimoor Bhatti
Send Time: 2021 Jul. 19 (Mon.) 18:43
To: user@flink.apache.org
Subject: Apache Flink Kafka Connector not found Error
I'm having some trouble with using the Flink DataStream API with the Kafka
Connector. There don't seem to be great resources on the internet which can
explain the issue I'm having.
My project is here: https://github.com/sysarcher/flink-scala-tests
I want to use FlinkKafkaConsumer, but I'm unable to import it.
> org.apache.flink.table.api.ValidationException: Could not find
> the required schema in property 'schema'.
>     at org.apache.flink.table.descriptors.SchemaValidator.validate(SchemaValidator.java:90)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:53)
>     ... 13 more
>
> During handling of the above exception, another exception occurred:
>
>   File ".../pyflink/table/descriptors.py", line 1295, in register_table_source
>     self._j_connect_table_descriptor.registerTableSource(name)
>   File "/home/manas/anaconda3/envs/flink_env/lib/python3.7/site-packages/py4j/java_gateway.py", line 1286, in __call__
> I can't figure out what's
> wrong with the schema. Is the schema not properly declared? Is some field
> missing?
>
> FWIW I have included the JSON and kafka connector JARs in the required
> location.
>
>
> Regards,
> Manas
>
>
> On Tue, Jun 30, 2020 at 11:58 AM Manas Kale wrote:
>> ... read kafka
>> data. Besides, you can refer to the common questions about PyFlink[3]
>>
>> [1]
>> https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/create.html#run-a-create-statement
>> [2]
>> https://ci.apache.org/projects/flink/flink-docs-rele
Best,
Xingbo
Manas Kale wrote on Mon, Jun 29, 2020 at 8:10 PM:
Hi,
I want to consume and write to Kafka from Flink's python API.
The only way I found to do this was through this
<https://stackoverflow.com/questions/52744277/apache-flink-kafka-connector-in-python-streaming-api-cannot-load-user-class>
question
on SO where the user essentially copies...
> ...you could have more parallelism than kafka partitions,
> but the extra subtasks will not do anything.
>
> From: "Chen, Mason"
> Date: Wednesday, May 27, 2020 at 11:09 PM
> To: "user@flink.apache.org"
> Subject: Flink Kafka Connector Source Parallelism
n"
Date: Wednesday, May 27, 2020 at 11:09 PM
To: "user@flink.apache.org"
Subject: Flink Kafka Connector Source Parallelism
Hi all,
I’m currently trying to understand Flink’s Kafka Connector and how parallelism
affects it. So, I am running the flink playground click count job and the
parallelism is set to 2 by default.
However, I don’t see the 2nd subtask of the Kafka Connector sending any
records: https://imgur.c
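For reference, a small sketch of keeping the source parallelism aligned with the topic's partition count, so that no source subtask sits idle; the topic name, group id, and a partition count of 1 are assumptions, not taken from the playground:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class SourceParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "clicks-demo");

        // A source subtask only receives records if a partition is assigned to it,
        // so match the source parallelism to the partition count of the topic.
        env.addSource(new FlinkKafkaConsumer<>("clicks", new SimpleStringSchema(), props))
                .setParallelism(1) // assumed: the "clicks" topic has 1 partition
                .print();

        env.execute("source-parallelism-example");
    }
}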
Yep. Glad to see the progress.
Best
At 2020-03-09 12:44:05, "Jingsong Li" wrote:
Hi Sunfulin,
I think this is very important too.
There is an issue to fix this[1]. Does that meet your requirement?
[1] https://issues.apache.org/jira/browse/FLINK-15396
Best,
Jingsong Lee
On Mon, Mar 9, 2020 at 12:33 PM sunfulin wrote:
Hi Sunfulin,
I think this is very important too.
There is an issue to fix this[1]. Does that meet your requirement?
[1] https://issues.apache.org/jira/browse/FLINK-15396
Best,
Jingsong Lee
On Mon, Mar 9, 2020 at 12:33 PM sunfulin wrote:
Hi community,
> I am wondering if there is some config
hi , community,
I am wondering if there are config params with an error-handling strategy, as
[1] refers to, when defining a Kafka stream table using Flink SQL DDL. For example,
something like `json.parser.failure.strategy` that can be set to `silently skip`,
so that malformed dirty data is skipped during processing.
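For what it's worth, FLINK-15396 eventually shipped as an option of the JSON format for skipping rows that fail to parse. A sketch of such a DDL, assuming a recent Flink version where the option key is 'json.ignore-parse-errors'; the table, topic, and fields are made up:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SkipMalformedJsonExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'json.ignore-parse-errors' = 'true' skips records that fail to parse
        // instead of failing the job.
        tEnv.executeSql(
                "CREATE TABLE kafka_events (\n"
                        + "  id STRING,\n"
                        + "  ts TIMESTAMP(3)\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'events',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'format' = 'json',\n"
                        + "  'json.ignore-parse-errors' = 'true'\n"
                        + ")");
    }
}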
Hey Hemant,
Are you able to reconstruct the ordering of the event, for example based on
time or some sequence number?
If so, you could create as many Kafka partitions as you need (for proper
load distribution), disregarding any ordering at that point.
Then you keyBy your stream in Flink, and order the events per key (see the sketch below).
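A sketch of that ordering step, assuming event time with watermarks is set up upstream, a hypothetical Event POJO with public fields `String deviceId` and `long timestamp`, and (to keep it short) at most one event per (key, timestamp):

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Event is a hypothetical POJO: public String deviceId; public long timestamp;
public class SortPerKey extends KeyedProcessFunction<String, Event, Event> {

    // Events buffered until the watermark passes their timestamp.
    private transient MapState<Long, Event> buffer;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("buffer", Long.class, Event.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out)
            throws Exception {
        buffer.put(event.timestamp, event);
        // Fire once the watermark reaches this event's timestamp.
        ctx.timerService().registerEventTimeTimer(event.timestamp);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out)
            throws Exception {
        // Event-time timers fire in timestamp order, so emission is ordered per key.
        Event event = buffer.get(timestamp);
        if (event != null) {
            out.collect(event);
            buffer.remove(timestamp);
        }
    }
}

Applied as stream.keyBy(e -> e.deviceId).process(new SortPerKey()).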
Hi Arvid,
Thanks for your response. I think I did not word my question properly.
I wanted to confirm that if the data is distributed to more than one
partition then the ordering cannot be maintained (which is documented).
According to your response I understand that if I set the parallelism to the
number of...
Hi Hemant,
Flink passes your configurations to the Kafka consumer, so you could check
if you can subscribe to only one partition there.
However, I would discourage that approach. I don't see the benefit over just
subscribing to the topic entirely and having dedicated processing for the
different device...
Hello Flink Users,
I have a use case where I am processing metrics from different type of
sources (one source will have multiple devices), and for aggregations as well
as for building alerts the order of messages is important. To maintain customer data
segregation I plan to have a single topic for each customer.
Hi guys,
I am using the Flink connector for Kafka from 1.9.0.
Here is my sbt dependency:
"org.apache.flink" %% "flink-connector-kafka" % "1.9.0",
When I check the log file I see that the kafka version is 0.10.2.0.
According to the docs, from 1.9.0 onwards the version should be 2.2.0. Why do I
see the older version?
Thanks Fabian. This is really helpful.
Hi Ben,
Flink correctly maintains the offsets of all partitions that are read by a
Kafka consumer.
A checkpoint is only complete when all functions have successfully checkpointed
their state. For a Kafka consumer, this state is the current reading offset.
In case of a failure, the offsets and the state of all functions are restored
from the last completed checkpoint.
Elias, thanks for your reply. In this case:
when # of Kafka consumers = # of partitions, and I use setParallelism(>1),
something like this
'messageSteam.rebalance().map(lamba).setParallelism(3).print()'
If checkpointing is enabled, I assume Flink will commit the offsets in the
'right order' during checkpointing.
There is no such concept in Flink. Flink tracks offsets in its
checkpoints. It can optionally commit offsets to Kafka, but that is only
for reporting purposes. If you wish to lower the number of records that
get reprocessed in the case of a restart, then you must lower the
checkpoint interval.
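Concretely, that is just the checkpoint interval setting; a minimal sketch (the 10-second value and the placeholder pipeline are illustrative):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointIntervalExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Offsets are tracked in checkpoints, so a shorter interval means fewer
        // records get replayed after a restart (at the cost of more checkpointing).
        env.enableCheckpointing(10_000L); // every 10 seconds

        env.fromElements(1, 2, 3).print(); // placeholder; a Kafka source would go here
        env.execute("checkpoint-interval-example");
    }
}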
Hi Flink users,
When # of Kafka consumers = # of partitions, and I use setParallelism(>1),
something like this
'messageSteam.rebalance().map(lamba).setParallelism(3).print()'
How do I tune the # of outstanding uncommitted offsets? Something similar to
https://storm.apache.org/releases/1.1.2/storm-k
Hi Kostas,
As far as I know, you cannot just use Java classes from within the Python
API. I think the Python API does not provide a wrapper for the Kafka connector. I
am adding Chesnay to cc to correct me if I am wrong.
Best,
Dawid
On 11/10/18 12:18, Kostas Evangelou wrote:
Hey all,
Thank you so much for your efforts. I've already posted this question on
stack overflow, but thought I should ask here as well.
I am trying out Flink's new Python streaming API and attempting to run my
script with ./flink-1.6.1/bin/pyflink-stream.sh examples/read_from_kafka.py.
The python script...
[ERROR] /env/mvn/test-0.0.1/processing-app/src/main/java/com/test/alpha/ReadFromKafka.java:[24,51]
package org.apache.flink.streaming.connectors.kafka does not exist
[ERROR] /env/mvn/test-0.0.1/processing-app/src/main/java/com/test/alpha/ReadFromKafka.java:[52,63]
cannot find symbol
[ERROR] symbol: class FlinkKafkaConsumer011
Here is my "pom.xml" configuration for the Flink Kafka connector:
<properties>
    <flink.version>1.4.2</flink.version>
    <java.version>1.8</java.version>
    <scala.binary.version>2.11</scala.binary.version>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.11_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
You will be able to use it. Kafka 1.10 has backwards compatibility with
v1.0, 0.11 and 0.10 connectors as far as I know.
On 13 April 2018 at 15:12, Lehuede sebastien wrote:
Hi All,
I'm very new to Flink (and to streaming applications in general), so
sorry for my newbie question.
I plan to do some tests with Kafka and Flink, and use the Kafka connector for
that.
I find information on this page :
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/co
Hi,
I'm no expert on Kafka here, but as the tasks are run on the worker
nodes (where the TaskManagers are run), please double-check whether the
file under /data/apps/spark/kafka_client_jaas.conf on these nodes also
contains the same configuration as on the node running the JobManager,
i.e. an appropriate JAAS configuration for the Kafka client.
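If the Kafka client version in use supports it (0.10.2+), an alternative that avoids shipping a JAAS file to every TaskManager node is the inline sasl.jaas.config consumer property; a sketch with placeholder mechanism and credentials:

import java.util.Properties;

public class KafkaJaasProperties {
    public static Properties saslProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9093");
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "PLAIN");
        // Inline JAAS config: no kafka_client_jaas.conf needed on the worker nodes.
        props.setProperty("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"alice\" password=\"secret\";");
        // Pass these properties to the Flink Kafka consumer/producer constructor.
        return props;
    }
}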
Hi all,
I use the code below to set the Kafka JAAS config; the serverConfig.jasspath is
/data/apps/spark/kafka_client_jaas.conf, but on a Flink standalone deployment
it crashes. I am sure the kafka_client_jaas.conf is valid, because other
applications (Spark Streaming) are still working fine with it.
Thanks, Gordon! Will keep an eye on that!
Connie
From: "Tzu-Li (Gordon) Tai"
Date: Monday, December 11, 2017 at 5:29 PM
To: Connie Yang
Cc: "user@flink.apache.org"
Subject: Re: Flink-Kafka connector - partition offsets for a given timestamp?
Hi Connie,
We do have a plan to make sure this happens for Flink 1.5, for which the
proposed release date is around February 2018.
Cheers,
Gordon
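For reference, the API that eventually shipped (around Flink 1.5) is setStartFromTimestamp on the consumer; a sketch, with the topic, properties, and epoch-millis value as placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class StartFromTimestampExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "timestamp-demo");

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), props);

        // Seek every partition to the earliest offset whose record timestamp
        // is >= the given epoch-millis value.
        consumer.setStartFromTimestamp(1512950400000L);
    }
}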
On Tue, Dec 12, 2017 at 3:53 AM, Yang, Connie wrote:
Hi,
Does the Flink-Kafka connector allow a job graph to consume topics/partitions
from a specific timestamp?
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java#L469
seems to...
Flink version - 1.2.0
Using the latest Kafka Connector - FlinkKafkaConsumer010 -
flink-connector-kafka-0.10_2.11

Issue faced: not able to get the consumer offsets from Kafka when using
Flink with the Flink-Kafka connector:

$ /work/kafka_2.11-0.10.1.1/bin/kafka-consumer-groups.sh --bootstrap-server
localhost:9092 --list
console-consumer-19886
console-consumer-89637
$

It does not show the consumer group "test".
For a group that does not exist, the message is as follows:
$ /work/...
Hi Timo & Fabian,
Thanks for replying. I'm using Zeppelin built off master. And Flink 1.2
built off the release-1.2 branch. Is that the right branch?
Neil
Are connectors being included in the 1.2.0 release or do you mean Kafka
specifically?
From: Fabian Hueske
Reply-To: "user@flink.apache.org"
Date: Tuesday, January 17, 2017 at 7:10 AM
To: "user@flink.apache.org"
Subject: Re: Zeppelin: Flink Kafka Connector
One thing to add: Flink 1.2.0 has not been released yet.
The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the
first release candidate (RC0).
Best, Fabian
2017-01-17 16:08 GMT+01:00 Timo Walther :
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010
was not present at that time. You need to upgrade to Flink 1.2.
Timo
On 17/01/17 at 15:58, Neil Derraugh wrote:
This is really a Zeppelin question, and I’ve already posted to the user list
there. I’m just trying to draw in as many relevant eyeballs as possible. If
you can help please reply on the Zeppelin mailing list.
In my Zeppelin notebook I'm having a problem importing the Kafka streaming
library for Flink...
Hi,
yes, the Flink Kafka connector for Kafka 0.8 handles broker leader changes
without failing. The SimpleConsumer provided by Kafka 0.8 doesn't handle
that.
The 0.9 Flink Kafka consumer also supports broker leader changes
transparently.
If you keep using the Flink Kafka 0.8 connector with 0.9 brokers, it should
continue to work, since 0.9 brokers still support 0.8 clients.
Hi,
Does the Flink Kafka connector 0.8.2 handle a broker's leader change
gracefully, given that a SimpleConsumer-based client has to handle leader
changes for a partition itself?
How would the consumer behave when upgrading the brokers from 0.8 to 0.9?
Thanks
Hey Arun!
How did you configure your Kafka source? If the offset has been
committed and you configured the source to read from the latest
offset, the message should not be re-processed.
– Ufuk
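For reference, a sketch of the start-position configuration from later Flink releases (1.3+), shown with the universal consumer; the topic and group id are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class StartPositionExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

        // Resume from the offsets committed for the group instead of re-reading
        // the last message...
        consumer.setStartFromGroupOffsets();
        // ...or jump straight to the end of the topic on (re)start:
        // consumer.setStartFromLatest();
    }
}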
On Fri, May 13, 2016 at 2:19 PM, Arun Balan wrote:
Hi, I am trying to use the flink-kafka-connector and I notice that every
time I restart my application it re-reads the last message on the kafka
topic. So if the latest offset on the topic is 10, then when the
application is restarted, the kafka-connector will re-read message 10. Why is
this the case?