RE: Flink kafka source with Avro Generic Record

2025-09-28 Thread Kamal Mittal via user
reflection. Rgds, Kamal From: Kamal Mittal via user Sent: 23 September 2025 19:03 To: Schwalbe Matthias ; user@flink.apache.org Subject: RE: Flink kafka source with Avro Generic Record Thanks for such a nice explanation. 1. Kryo is not used now as per attached classes (custom type information

Re: [Flink 2.0.0] SQL job fails with ClassNotFoundException: org.apache.flink.api.connector.sink2.StatefulSink

2025-09-27 Thread Shengkai Fang
cc @Hongshun tomi nader wrote on Thu, Sep 18, 2025 at 22:13: > Hello Flink community, > > I am encountering an issue when trying to run a SQL job on *Flink 2.0.0 > (standalone, macOS)* with a *Kafka sink (Docker).* > > *Setup:* > >- > >Flink 2.0.0 install

Inconsistent HistoryServer behavior under different job shutdown methods in Flink

2025-09-27 Thread lec ssmi
Hi: After enabling Flink’s HistoryServer, we observed that different ways of stopping a running job lead to different results in the HistoryServer: If we use cancel or stop with savepoint, in most cases the HistoryServer can display the basic information of the job (for example checkpoint status,

RE: Flink kafka source with Avro Generic Record

2025-09-27 Thread Schwalbe Matthias
: Tuesday, September 23, 2025 5:05 AM To: user@flink.apache.org Subject: [External] RE: Flink kafka source with Avro Generic Record ⚠EXTERNAL MESSAGE – CAUTION: Think Before You Click ⚠ Can someone please give input for below? From: Kamal Mittal mailto:kamal.mit...@ericsson.com>> Sent: 22 Sep

[ANNOUNCE] Apache Flink CDC 3.5.0 released

2025-09-26 Thread Yanquan Lv
The Apache Flink community is very happy to announce the release of Apache Flink CDC 3.5.0. Apache Flink CDC is a distributed data integration tool for real time data and batch data, bringing the simplicity and elegance of data integration via YAML to describe the data movement and transformation

Flink Kubernetes Operator, 1.13 Release timeline

2025-09-26 Thread Francis Altomare
Hello everyone, I’m currently testing the 1.13 version of the Flink operator by building it from source and deploying it to my cluster. I need this version of the operator since it supports Flink 2.1. From what I can tell everything is working great in this snapshot version. I wanted to

Re: Flink Kubernetes Operator, 1.13 Release timeline

2025-09-26 Thread Gabor Somogyi
, > > I’m currently testing the 1.13 version of the Flink operator by building > it from source and deploying it to my cluster. I need this version of the > operator since it supports Flink 2.1. From what I can tell everything is > working great in this snapshot version. > > I wanted

Re: Flink CDC: Random but persistent communications link failure

2025-09-25 Thread Peter Muller
y, Peter > > Very detailed issue description, there’re two possible reasons which may > lead to current case from my understanding. > > (1) The network issue, could you check network from Flink to MySQL via > telnet 192.168.10.32 3306 in TaskManager? > (2) The Higher MySQL ve

Re: DeclaringAsyncKeyedProcessFunction's and Flink 2 async state APIs

2025-09-24 Thread Zakelly Lan
Hi Francis, You could inherit from the original `KeyedProcessFunction` to process async state. Please do remember to use the ForSt state backend, which is the only one that supports asynchronous state access. If your state size is huge, you will get the benefit. The `DeclaringAsyncKeyedProcessFun
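A minimal sketch of the pattern Zakelly describes, assuming the Flink 2.x `state.v2` API with the ForSt backend; the method names used here (`asyncValue`, `asyncUpdate`, `enableAsyncState`) follow the async-state docs and may differ by version, so treat this as illustrative rather than authoritative:

```java
import org.apache.flink.api.common.state.v2.ValueState;
import org.apache.flink.api.common.state.v2.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// A plain KeyedProcessFunction using the v2 (async) state API: asyncValue()
// hands back a StateFuture instead of blocking on the state backend, so
// ForSt can overlap state I/O with record processing.
public class CountPerKey extends KeyedProcessFunction<String, String, Long> {

    private transient ValueState<Long> count;

    @Override
    public void open(org.apache.flink.api.common.functions.OpenContext ctx) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Types.LONG));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) {
        // Chain the read and the write; the callbacks fire when each
        // StateFuture completes, without blocking the task thread.
        count.asyncValue().thenAccept(current -> {
            long next = (current == null ? 0L : current) + 1;
            count.asyncUpdate(next).thenAccept(ignored -> out.collect(next));
        });
    }
}
```

Wired into a job with something like `stream.keyBy(...).enableAsyncState().process(new CountPerKey())` on a cluster configured with `state.backend.type: forst`.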

DeclaringAsyncKeyedProcessFunction's and Flink 2 async state APIs

2025-09-24 Thread Francis Altomare
Hello everyone, I have a question about using the Async state APIs in a KeyedProcessFunction. I’m defining a pipeline like this, keyBy(KeySelector()).enableAsyncState().process(MyKeyedProcessFunction()). Inside MyKeyedProcessFunction I’m using the org.apache.flink.api.common.state.v2.ValueStat

RE: Flink kafka source with Avro Generic Record

2025-09-24 Thread Schwalbe Matthias
emulate a Kafka source, i.e. byte[] is ok here, however, you don’t need to use byte[] in any other place throughout and use Flink API to your favor … * E.g. your DelayFunction can work directly with GenericRecord instead of byte[]: * Line 206, Line 216 … in that you don’t need to serialize

Flink CDC: Random but persistent communications link failure

2025-09-23 Thread Peter Muller
Hi everybody, I am trying to replicate a MySQL database into StarRocks using a Flink CDC pipeline. The Flink job loads correctly, starts the incremental snapshot and transfers multiple GB (sometimes even multiple hundred GBs) into the sink DB. Then, after an arbitrary duration of time (it ranges

RE: Flink kafka source with Avro Generic Record

2025-09-22 Thread Kamal Mittal via user
Hello, I tried this and Flink fails later, when it tries to serialize/deserialize the GenericRecord object for communication between operators (e.g. from map() to another operator, or writing checkpoints, or shuffling). it's a serialization issue during operator chaining or data exchan

Flink kafka source with Avro Generic Record

2025-09-22 Thread Kamal Mittal via user
Hello, I need to support Flink application accepting avro binary events with different schemas over flink kafka source. Need to use custom schema registry to fetch schema at runtime and decode the incoming event. Will use Avro Generic Record to decode incoming event with different avro

Re: Is there a Confluent Cloud Kafka connector for Protobuf with Schema Registry support when using the Flink SQL API?

2025-09-20 Thread George Leonard
Will check Pretty sure I did a blog that does exactly this. Sent from my iPhone George Leonard __ george...@gmail.com +27 82 655 2466 > On 12 Sep 2025, at 11:29, Javad Saljooghi wrote: > >  > Hi, > > I deployed Flink on Kubernetes (running on AWS EC

Re: Flink datatype utils - safe to use?

2025-09-20 Thread Tom Cooper
Hi Nic, It would be good to get some clarity on those classes. Several methods from DataTypeUtils were used in Flink Connector Kafka and when I migrated the connector to Flink 2.1 [1] I had to deal with the fact that some of those methods had been removed [2] without first being deprecated

Re: Postgresql cdc for Flink 2.0

2025-09-18 Thread George
Hi all / Hong. Slight contradiction picked up, currently Flink CDC 3.4 (Postgres) is not compatible with Flink 2.x https://nightlies.apache.org/flink/flink-cdc-docs-release-3.4/docs/connectors/flink-sources/overview/#supported-flink-versions So the setting above wont make a difference. G On

Re: Postgresql cdc for Flink 2.0

2025-09-18 Thread George
'adults' ,'slot.name' = 'adults0' ,'scan.incremental.snapshot.enabled' = 'true' -- experimental feature: incremental snapshot (default off) ,'scan.startup.mode' = 'initial' -- https://nightlies.apache.

[Flink 2.0.0] SQL job fails with ClassNotFoundException: org.apache.flink.api.connector.sink2.StatefulSink

2025-09-18 Thread tomi nader
Hello Flink community, I am encountering an issue when trying to run a SQL job on *Flink 2.0.0 (standalone, macOS)* with a *Kafka sink (Docker).* *Setup:* - Flink 2.0.0 installed locally (~/apps/flink-2.0.0) - Kafka/Zookeeper running via Docker Compose - Job defined in

Re: Postgresql cdc for Flink 2.0

2025-09-17 Thread George
Flink 1.20.x does not allow me to unpack/repack the json as a string into the array of rows as a complex structure... my inbound json payload is packed into a column called "data" as type jsonb... with the 1.20.x functionality I can't unpack/jsonify it... and the functionality ne

Flink autoscaler is not working

2025-09-17 Thread Sachin Mittal
Hi, I am trying to figure out why AutoScaler is not working. I have deployed my Flink job on Yarn using AWS EMR. My configurations are: job.autoscaler.enabled: 'true' job.autoscaler.decision.interval: 10m job.autoscaler.stabilization.interval: 50m job.autoscaler.vertex.min-paral

Re: Postgresql cdc for Flink 2.0

2025-09-17 Thread George
'slot.name' = 'adults0' ,'scan.incremental.snapshot.enabled' = 'true' -- experimental feature: incremental snapshot (default off) ,'scan.startup.mode' = 'initial' -- https://nightlies.apache.org/flink/flink-cdc-docs-release-3.1/docs/connectors/fli

Re: Postgresql cdc for Flink 2.0

2025-09-17 Thread George Leonard
, Hongshun Wang wrote: Hi George, Please use 'scan.incremental.snapshot.enabled' = 'true'. The old SourceFunction has been removed in Flink 2.0. Otherwise, you can use Flink 1.20. Best, Hongshun On Sun, Sep 14, 2025 at 11:48 PM George <george...@gmail.com> wrote: Hi all...

Re: Postgresql cdc for Flink 2.0

2025-09-17 Thread Hongshun Wang
Hi George, Please use 'scan.incremental.snapshot.enabled' = 'true'. The old SourceFunction has been removed in Flink 2.0. Otherwise, you can use Flink 1.20. Best, Hongshun On Sun, Sep 14, 2025 at 11:48 PM George wrote: > Hi all... > > Below are the jars includ
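As a rough illustration of Hongshun's suggestion (table, column, and connection values here are made up; the option names follow the Flink CDC Postgres connector docs):

```sql
-- Hypothetical Postgres CDC table. Setting
-- 'scan.incremental.snapshot.enabled' = 'true' switches to the
-- incremental-snapshot source, which is required on Flink 2.0 because
-- the legacy SourceFunction-based source was removed.
CREATE TABLE adults (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'     = 'postgres-cdc',
  'hostname'      = 'localhost',
  'port'          = '5432',
  'username'      = 'flink',
  'password'      = '***',
  'database-name' = 'mydb',
  'schema-name'   = 'public',
  'table-name'    = 'adults',
  'slot.name'     = 'adults0',
  'scan.incremental.snapshot.enabled' = 'true'
);
```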

Migrating savepoints from Flink 1.20.2 to Flink 2.0

2025-09-17 Thread nick toker
Hi, We are in the process of upgrading our Flink applications from Flink 1.20.2 to Flink 2.0. Our application is running in production environments on various sites. We have not found a working/reliable procedure for migrating the savepoints from Flink 1.20.2 to Flink 2.0, even though we

Target date for Flink Kubernetes Operator release (1.12.3)

2025-09-15 Thread Andrea Sella
Dear users, is there a plan for an official release (1.12.3) for the Kubernetes operator any time soon? We have hit a roadblock for migrating the Flink workload from Kubernetes 1.32 to 1.33, see https://lists.apache.org/thread/1xk9s1bzxwt3nqk1vs817t4crw9lo3nw. We will run our own custom version

Postgresql cdc for Flink 2.0

2025-09-14 Thread George
Hi all... Below are the jars included in my Flink 2.0 build, and then the catalog create and the query... If I drop Flink to 1.20.2 and associated jars then all works, but for 2.0 I'm a bit stuck... *Dockerfile* RUN echo "--> Install JARs: Flink's S3 plugin" &&

Re: Is there a Confluent Cloud Kafka connector for Protobuf with Schema Registry support when using the Flink SQL API?

2025-09-13 Thread George Leonard
t; > Sent from my iPhone > > George Leonard > __ > george...@gmail.com > +27 82 655 2466 > >>> On 12 Sep 2025, at 11:29, Javad Saljooghi wrote: >>> >>  >> Hi, >> >> I deployed Flink on Kubernetes (running on

Flink Kubernetes Operator problems

2025-09-13 Thread Nikola Milutinovic
Hi all. It seems that we have problems when we try to create a group of Flink Session Jobs, the Operator first runs into some timeouts, and most jobs go into RECONCILING state. Then they do manage to get to running state, but when we inspect Flink UI, we can see that some of the jobs are

Is there a Confluent Cloud Kafka connector for Protobuf with Schema Registry support when using the Flink SQL API?

2025-09-12 Thread Javad Saljooghi
Hi, I deployed Flink on Kubernetes (running on AWS EC2) using the Flink K8s operator v1.2.0 and a custom Dockerfile based on the image flink:2.0.0-java21. My messages are in Kafka (Confluent Cloud), serialized with Protobuf and registered in the Confluent Schema Registry. I was able to connect

Flink datatype utils - safe to use?

2025-09-12 Thread Nic Townsend
Hi, there are two useful utilities classes: * https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java//org/apache/flink/table/types/utils/DataTypeUtils.html * https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java//org/apache/flink/table/types/logical/utils

Flink OpenSearch Connector Support OpenSearch Serverless

2025-09-11 Thread muhammad siddique
Hi Flink Community, I'm trying to use the Apache Flink OpenSearch connector to sink data into an AWS OpenSearch Serverless collection. I've made some attempts, but my Flink job consistently fails with a 403 Forbidden error when trying to connect to the OpenSearch endpoint. This makes

Re: Flink Kubernetes Operator problems

2025-09-10 Thread Gyula Fóra
the operator logic to try to figure out what's going on. Not sure how to mitigate it unless we know what causes it. Cheers Gyula On Wed, Sep 10, 2025 at 5:25 PM Nikola Milutinovic wrote: > Hi all. > > It seems that we have problems when we try to create a group of Flink >

Flink Metrics - Histogram

2025-09-07 Thread Ben Amiel
Hello everyone. I've been using Flink much to my pleasure. However, when trying to add metrics to my application I have encountered hardships. From what I've seen the Prometheus exporter is reporting Flink's Histogram metric as a Summary. Unfortunately, this is a deal breaker for me when

[ANNOUNCE] flink-mcp: MCP Server for Apache Flink SQL Gateway

2025-09-03 Thread Maciej Bryński
Hi Flink Community, I'd like to share *flink-mcp*, an MCP (Model Context Protocol) server that connects Apache Flink with AI assistants and other MCP-compatible tools. *What it does:* flink-mcp enables AI assistants to interact directly with your Flink clusters through the SQL Gateway RES

Apache Flink Catalogs

2025-09-01 Thread George
Hi all I am using a postgres store atm via a jdbc connector to create my type = Paimon catalog, as per below, this is for my outbound data flow. CREATE CATALOG c_paimon WITH ( 'type' = 'paimon' ,'metastore' = 'jdbc' ,'catalog-key' = 'jdbc' -- PostgreSQL JDBC connection ,'uri' = 'jdbc:postgresql:/

Re: Flink Sink when using AsyncSinkBase

2025-09-01 Thread Ahmed Hamdy
ast-once*. >>> >>> From my understanding, exactly-once might not really fit this model. >>> IMO, to make exactly-once work, the sink has to stop at each >>> checkpoint, wait for all outstanding requests to finish, and force writes >>> to happen in order

Re: Flink Sink when using AsyncSinkBase

2025-08-31 Thread Mark Zitnik
rk, the sink has to stop at each checkpoint, wait >> for all outstanding requests to finish, and force writes to happen in order >> so you can clearly separate "records before this checkpoint" from "records >> after it." At that point you’re basically running sync

Re: Flink Sink when using AsyncSinkBase

2025-08-28 Thread Ahmed Hamdy
m "records > after it." At that point you’re basically running synchronously, so the > whole async throughput benefit is gone. > > For systems that actually support transactions (Kafka EOS, JDBC XA, File > sinks, etc.), Flink has separate *committing sink* designs that are

Re: Flink Sink when using AsyncSinkBase

2025-08-27 Thread Poorvank Bhatia
point you’re basically running synchronously, so the whole async throughput benefit is gone. For systems that actually support transactions (Kafka EOS, JDBC XA, File sinks, etc.), Flink has separate *committing sink* designs that are a better fit. That’s my read of it — but I might be missing some

Flink Sink when using AsyncSinkBase

2025-08-27 Thread Mark Zitnik
Hi Reading the AsyncSinkBase source code, I encountered a section of limitations. One of the points is "We are not considering support for exactly-once semantics at this point." Can someone share the reason for this? Any plans to develop it in the future? Regards, Mark

RE: Sharing data among flink operator pipeline

2025-08-25 Thread Kamal Mittal via user
Ok thanks. From: Kyle Lahnakoski Sent: 25 August 2025 18:51 To: Kamal Mittal ; user@flink.apache.org Subject: Re: Sharing data among flink operator pipeline

Re: Sharing data among flink operator pipeline

2025-08-25 Thread Kyle Lahnakoski via user
operators to execute on the same task thread (operator chain), avoid shuffles (keyBy, rebalance, etc.) and keep parallelism equal so Flink uses forward partitioning and chaining. From: Kamal Mittal via user Date: Monday, August 25, 2025 at 7:57 AM To: Nikola Milutinovic , user

RE: Sharing data among flink operator pipeline

2025-08-25 Thread Kamal Mittal via user
No, I am not talking about Kafka Message Key. Rather key/value pair which comes separately as below. Headers org.apache.kafka.clients.consumer.ConsumerRecord.headers() From: Nikola Milutinovic Sent: 25 August 2025 14:56 To: user@flink.apache.org Subject: Re: Sharing data among flink operator

Re: Sharing data among flink operator pipeline

2025-08-25 Thread shenzhongwei
unsubscribe Original message From: Nikola Milutinovic

Re: Sharing data among flink operator pipeline

2025-08-25 Thread Nikola Milutinovic
: Thursday, August 21, 2025 at 11:35 AM To: user@flink.apache.org Subject: Sharing data among flink operator pipeline Hello, I have a scenario like for kafka source where kafka headers also come along with kafka record/event. Kafka headers fetched need to share/pass to next/parallel operators in

[ANNOUNCE] Apache flink-connector-kafka 4.0.1 release

2025-08-22 Thread Fabian Paul
The Apache Flink community is very happy to announce the release of Apache flink-connector-kafka 4.0.1. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. The release is available for download

Re: Sharing data among flink operator pipeline

2025-08-21 Thread shenzhongwei
unsubscribe Original message From: Pedro Mázala

Re: Sharing data among flink operator pipeline

2025-08-21 Thread Pedro Mázala
Hello there, Kamal! Whenever I need to keep passing data downstream, I reflect it on my POJO. Usually, I call those fields metadata and access them downstream. The state will also segment data by operator, in other words, an operator cannot access stored data from other operators (unless you use b

Sharing data among flink operator pipeline

2025-08-21 Thread Kamal Mittal via user
Hello, I have a scenario like for kafka source where kafka headers also come along with kafka record/event. Kafka headers fetched need to share/pass to next/parallel operators in pipeline. So, is there any way to share data across operator pipeline? Explored keyed state which has limitation th

Consistent RTE Job failures for Flink Job

2025-08-11 Thread Amrit Sarkar
To the community, Our Flink task managers are currently experiencing frequent swapping in and out due to Karpenter autoscaling. This is causing task managers to repeatedly encounter RemoteTransportException (RTE), leading to job failures. We are looking for the ideal configuration to prevent

Re: What is the best way to play with json in flink?

2025-08-09 Thread Zhanghao Chen
rowse/JDK-8275863 Best, Zhanghao Chen From: Xeno Amess Sent: Saturday, August 9, 2025 22:07 To: Zhanghao Chen Cc: user@flink.apache.org Subject: Re: What is the best way to play with json in flink? I notice you mean especially in jdk 17+, is there any more detail

Re: What is the best way to play with json in flink?

2025-08-09 Thread Xeno Amess
I notice you mean especially in JDK 17+, are there any more details about it? Right now we are using Flink 1.14 with JDK 8, so I do have a plan to do a whole upgrade (meaning updating to Flink 2.1 with JDK 24/25/GraalVM 24). Zhanghao Chen wrote on Sat, Aug 9, 2025 at 21:56: > We find fastjson2 much fas

Re: What is the best way to play with json in flink?

2025-08-09 Thread Zhanghao Chen
We find fastjson2 much faster than Jackson, especially with JDK17+. Best, Zhanghao From: Xeno Amess Sent: Friday, August 8, 2025 6:17:31 PM To: user@flink.apache.org Subject: What is the best way to play with json in flink? right now we use fastjson2 (in

Consistent RTE Job failures for Flink Job

2025-08-08 Thread Amrit Sarkar
To the community, Our Flink task managers are currently experiencing frequent swapping in and out due to Karpenter autoscaling. This is causing task managers to repeatedly encounter RemoteTransportException (RTE), leading to job failures. We are looking for the ideal configuration to prevent

What is the best way to play with json in flink?

2025-08-08 Thread Xeno Amess
Right now we use fastjson2 (in Collector, Source, etc.), but since there is an embedded Jackson provided in Flink, we wonder whether it is worth migrating to it instead. We care about performance very much, so if there is a performance benchmark please let us know... Also I'm kind of con

Re: What is the best way to play with json in flink?

2025-08-08 Thread Xeno Amess
Thanks to the community guys anyway. Xeno Amess wrote on Fri, Aug 8, 2025 at 18:17: > right now we use fastjson2 (in Collector, Source, etc), but as there be an > embedded jackson provided in flink, we doubt if it be worthy to migrate to > using it instead. > we take care of performance very much

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-08-04 Thread Jiangang Liu
; > On Thu, Jul 31, 2025 at 8:03 PM Lincoln Lee > > > wrote: > > > > > >> Congratulations! Thanks for driving this! > > >> > > >> Best, > > >> Lincoln Lee > > >> > > >> > > >> Leonard Xu 于2025年7月3

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-08-04 Thread Jacky Lau
; >> > >> Leonard Xu wrote on Thu, Jul 31, 2025 at 15:36: > >> > >> > Nice! Thanks Ron and all involved for the great work. > >> > > >> > Best, > >> > Leonard > >> > > >> > > On Jul 31, 2025 at 14:30, Ron Liu wrote: > >> >

Webinar - Apache Flink Production Monitoring & Optimizations Tips

2025-08-04 Thread Itamar Syn-Hershko via user
Hi folks, Wanted to invite you all to a public webinar we are holding next week - https://www.linkedin.com/events/7357004263773872128/ In this webinar, we’ll share practical insights and proven strategies for monitoring Flink clusters, identifying bottlenecks, and optimizing resource usage. We

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-07-31 Thread xiangyu feng
ulations! Thanks for driving this! >> >> Best, >> Lincoln Lee >> >> >> Leonard Xu 于2025年7月31日周四 15:36写道: >> >> > Nice! Thanks Ron and all involved for the great work. >> > >> > Best, >> > Leonard >> > >> &

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-07-31 Thread Zakelly Lan
ll involved for the great work. > > > > Best, > > Leonard > > > > > On Jul 31, 2025 at 14:30, Ron Liu wrote: > > > > > > The Apache Flink community is very happy to announce the release of > > Apache > > > Flink 2.1.0, which is the first rel

[ANNOUNCE] Apache Flink 2.1.0 released

2025-07-30 Thread Ron Liu
The Apache Flink community is very happy to announce the release of Apache Flink 2.1.0, which is the first release for the Apache Flink 2.1 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

Re: Is it acceptable to create a FromCollectionSource in flink-runtime/flink-runtime-java?

2025-07-30 Thread Xeno Amess
It seems there is FLINK-33326. Is anyone handling it now? If not, I would like to have a try... Xeno Amess wrote on Wed, Jul 30, 2025 at 18:49: > Hi. > I see the `fromCollection` impl, and find out it actually use > `FromIteratorFunction`, which be a `SourceFunction`, witch be > deprecated&a

Is it acceptable to create a FromCollectionSource in flink-runtime/flink-runtime-java?

2025-07-30 Thread Xeno Amess
Hi. I see the `fromCollection` impl, and found out it actually uses `FromIteratorFunction`, which is a `SourceFunction`, which is deprecated & internal. I wonder if it would be valuable to create a class like this, but using the new `Source` interface.

Re: Issue with Special Characters in userProgramArguments When Submitting Flink Application to YARN

2025-07-27 Thread Shengkai Fang
hi, Which version do you use? Could you share the whole exception stack? Best, Shengkai 李 琳 wrote on Tue, Jul 8, 2025 at 23:06: > > Hi all, > > I used the following code to create an applicationConfiguration for > submitting a Flink job to a YARN cluster in application mode: > > *App

Re:Re: Where is flink 2.1?

2025-07-23 Thread Xuyang
ion of 2.2. https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. On 23 Jul 2025, at 12:42, Xeno Amess wrote: (though I know we don't really follow semver at backward compa, at least we can try the 3-parts version number...like 2.1.0...or

Re: Where is flink 2.1?

2025-07-23 Thread Xeno Amess
k on several versions at the same time. You >> were looking at the 2.1 notes in a snapshot version of 2.2. >> >> https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides >> details about the 2.1 release. >> >> >> >> On 23 Jul 2025, at 12:4

Re: Where is flink 2.1?

2025-07-23 Thread Xeno Amess
i.apache.org/confluence/display/FLINK/2.1+Release provides > details about the 2.1 release. > > > > On 23 Jul 2025, at 12:42, Xeno Amess wrote: > > (though I know we don't really follow semver at backward compa, at least > we can try the 3-parts version number...like 2.1.0...or

Flink Java17-based incompatibility with Kubernetes Operator

2025-07-23 Thread Nikola Milutinovic
Hello all. Just wanted to give a heads-up that Flink 1.20 images based on Java 17 have problems with launching jobs via Flink Kubernetes Operator. Those based on Java 11 work. We are running a Flink Session cluster on Kubernetes, deploying it using Flink K8s Operator. Our session cluster was

Re: Where is flink 2.1?

2025-07-23 Thread Frens Jan Rumph
The project can of course work on several versions at the same time. You were looking at the 2.1 notes in a snapshot version of 2.2. https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. > On 23 Jul 2025, at 12:42, Xeno Amess wr

Issue on Flink CDC with Oracle CDB + PDB

2025-07-21 Thread hlx98007
Flink 1.20.2 tgz package with oracle-cdc 3.4 and ojdbc17.jar downloaded from official website. Here are my steps: -- Step 1: Setup XE to enable archive logs alter system set db_recovery_file_dest_size = 10G; alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area'

Flink K8S operator JobHistory NullPointerException

2025-07-21 Thread Dominik Wosiński
Hey, I'm trying to run Flink K8S Operator with version 1.12. The Flink version I'm trying to use is 1.17.1; generally the deployment looks & works correctly, however if at any point the job gets restarted because of an exception, the operator can't correctly deserial

Flink Kubernetes Operator Fails on Kubernetes 1.33 - fabric8 Client Issue

2025-07-15 Thread André Midea Jasiskis
Hi Flink community, We've encountered a compatibility issue with the Flink Kubernetes operator on Kubernetes 1.33 clusters. The operator fails reconciliation with the following error: Unrecognized field "emulationMajor" (class io.fabric8.kubernetes.client.VersionInfo), not marked

Re: Flink 2.0 Migration RockDBStateBackend

2025-07-15 Thread Han Yin
Hi Patricia, ```setPredefinedOptions``` can still be found in EmbeddedRocksDBStateBackend in Flink 2.0. You can set the option programmatically via config: config.set( RocksDBOptions.PREDEFINED_OPTIONS, PredefinedOptions.FLASH_SSD_OPTIMIZED.name()); Best, Han Yin > Jul 15, 2025, 13
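For completeness, the same predefined options can be set declaratively in the Flink configuration file rather than in code; a sketch, assuming the standard RocksDB state backend config keys:

```yaml
# flink-conf.yaml / config.yaml equivalent of the programmatic call above
state.backend.type: rocksdb
state.backend.rocksdb.predefined-options: FLASH_SSD_OPTIMIZED
```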

Flink 2.0 Migration RockDBStateBackend

2025-07-14 Thread patricia lee
Hi, We are migrating our Flink 1.20 to Flink 2.0. We have set the code in our project like this: RockDBStateBackend rockDb = new RockDBStateBackend(config.getPath() + "/myrockdbpath"); rockdb.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED) We followed the new way in Fl

Re: [ANNOUNCE] Apache Flink 1.20.2 released

2025-07-11 Thread Sergey Nuyanzin
Congratulations to everyone involved! Thank you Ferenc for managing the release! On Fri, Jul 11, 2025 at 1:15 PM Ferenc Csaky wrote: > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.20.2, which is the second bugfix release for the Apache F

Re: [ANNOUNCE] Apache Flink 1.19.3 released

2025-07-11 Thread Sergey Nuyanzin
Congratulations to everyone involved! Thank you Ferenc for managing the release! On Fri, Jul 11, 2025 at 1:13 PM Ferenc Csaky wrote: > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.19.3, which is the third bugfix release for the Apache F

[ANNOUNCE] Apache Flink 1.20.2 released

2025-07-11 Thread Ferenc Csaky
The Apache Flink community is very happy to announce the release of Apache Flink 1.20.2, which is the second bugfix release for the Apache Flink 1.20 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

[ANNOUNCE] Apache Flink 1.19.3 released

2025-07-11 Thread Ferenc Csaky
The Apache Flink community is very happy to announce the release of Apache Flink 1.19.3, which is the third bugfix release for the Apache Flink 1.19 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

[ANNOUNCE] Apache Flink Kubernetes Operator 1.12.1 released

2025-07-11 Thread Gyula Fóra
The Apache Flink community is very happy to announce the release of Apache Flink Kubernetes Operator 1.12.1. The Flink Kubernetes Operator allows users to manage their Apache Flink applications and their lifecycle through native k8s tooling like kubectl. The release is available for download at

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread Zhanghao Chen
Yes, it's possible. You may refer to the example here: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#most-frequent-issues Best, Zhanghao Chen From: patricia lee Sent: Thursday, Ju

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
ew config > pipeline.serialization-config [1]. I've created a new JIRA issue [2] to fix > the doc. > > [1] > https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config > [2] https://issues.apache.org/jira/browse/FLINK

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread Zhanghao Chen
Hi Patricia, You may register the type using the new config pipeline.serialization-config [1]. I've created a new JIRA issue [2] to fix the doc. [1] https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config [2] https://issues.apach
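For illustration, registering a POJO through that option could look like the following; the class name is hypothetical and the exact YAML shape should be checked against the linked config docs:

```yaml
# Flink configuration: declarative replacement for programmatic type registration
pipeline.serialization-config:
  - org.example.MyClass: {type: pojo}
```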

Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
Hi, We are currently migrating from Flink version 1.18 to Flink version 2.0. We have this configuration: StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); In flink 2.0, if our understanding is correct, we'll use this registerPoj

Flink 2.0 registerTypes question

2025-07-10 Thread patricia lee
Hi, We are currently migrating our flink projects from version 1.18 to 2.0. We have this part of the codes that we set the model in the StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); Right now in Flink 2.0, I followed this

Issue with Special Characters in userProgramArguments When Submitting Flink Application to YARN

2025-07-08 Thread 李 琳
Hi all, I used the following code to create an applicationConfiguration for submitting a Flink job to a YARN cluster in application mode: ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(userProgramArguments, userMainClass); However, I

Re: Flink Example

2025-07-07 Thread Jeremie Doehla
"4.0.0-2.0" lazy val flink_simple_testing = project .in(file("flink_simple_testing")) .settings( name := "flink-testing", libraryDependencies ++= Seq( "org.apache.flink" % "flink-core" % flinkVersion, "org.apache.flink" % "

Flink Example

2025-07-03 Thread Jeremie Doehla
ere: https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/dev/datastream/overview/ Updated slightly to this: import org.apache.flink.api.common.functions.FlatMapFunction; import org.apache.flink.api.java.tuple.Tuple2; import org.apache.flink.streaming.api.datastream.DataStre

Flink autoscaler on Kubernetes

2025-07-03 Thread Sachin Mittal
Hi I just wanted to know if we have to do some special configurations at the infrastructure provider to scale up the job. For example if I add more nodes in my node pool of my k8s cluster, the task managers are not automatically scaled up. Now I have also run Flink on AWS EMR where it uses Yarn

Query: Geo-Redundancy with Apache Flink on Kubernetes & Replicated Checkpoints !

2025-07-03 Thread Sachin
Dear Apache Flink Community, I hope you're doing well. We are currently operating a Flink deployment on *Kubernetes*, with *high availability (HA) configured using Kubernetes-based HA services*. We're exploring approaches for *geo-redundancy (GR)* to ensure disaster recovery and fault

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Salva Alcántara
Hey Gyula! I created https://issues.apache.org/jira/browse/FLINK-38033 I will add logs / provide more details when I have time... On Mon, Jun 30, 2025 at 6:18 PM Gyula Fóra wrote: > Hey! > > Could you please open a JIRA ticket with the operator logs so we can > investigate? > &

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Salva Alcántara
{ "emoji": "👍", "version": 1 }

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Gyula Fóra
rg/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq > > Regards, > > Salva > > On 2025/06/27 15:21:47 Salva Alcántara wrote: > > I was running Flink Kubernetes Operator v1.11 with a minor tweak, see > more > > here: > > - > > > https://www.project-syndicate.org/comm

RE: Flink Operator v1.12 breaks jobs upgrades

2025-06-27 Thread Salva Alcántara
Sorry, I meant this link (for the operator tweak which I indeed applied in v1.10, not v.11): - https://lists.apache.org/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq Regards, Salva On 2025/06/27 15:21:47 Salva Alcántara wrote: > I was running Flink Kubernetes Operator v1.11 with a minor tweak,

Flink Operator v1.12 breaks jobs upgrades

2025-06-27 Thread Salva Alcántara
I was running Flink Kubernetes Operator v1.11 with a minor tweak, see more here: - https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05 and everything was fine. However, after upgrading to v1.12, the operator is

RE: Flink K8 Operator: New Snapshot CRD Not Working

2025-06-26 Thread Salva Alcántara
After upgrading the Flink Kubernetes Operator from v1.11 to v1.12 upgrades started to fail in all my jobs with the following error message: ``` Error during event processing ExecutionScope{ resource id: ResourceID{name='my-job-checkpoint-periodic-1741010907590', namespace='plat

Errors on adding job to flink run

2025-06-25 Thread George
hi all ok, so I'm using the following files in my parent pom file. The Flink source connector's intention is to create snmp agents/targets. Use flink sql to define a table that will be scraped at an interval specified. 1.20.1 17 3.11.0 3.5.2 ${java.version} ${java.version} UTF-8 1.7.36 2

Re: Flink SQL Plain avro schema deserialization

2025-06-20 Thread Yarden BenMoshe
Hi all, In case it helps someone in the future, I wanted to share that I was finally able to identify the problem and create a fix that works for me. As mentioned in earlier messages, the default Avro behavior in Flink generates a schema based on the table definition. This includes some

Interest Check: Flink-Pinot Connector with SQL & Upsert Support

2025-06-18 Thread Poorvank Bhatia
Following up on a proposal I shared in the dev thread <https://lists.apache.org/thread/sov9vg1jl0tnv4d1857sk69s96dcnwq1> — we’re working on externalizing and modernizing the Flink-Pinot connector (currently in Apache Bahir, which is now in the Attic). The plan includes: - Movi
