Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-08-04 Thread Jacky Lau
Leonard Xu wrote on Thursday, July 31, 2025 at 15:36: > Nice! Thanks Ron and all involved for the great work. > Best, > Leonard > On 31 Jul 2025 at 14:30, Ron Liu wrote:

Webinar - Apache Flink Production Monitoring & Optimizations Tips

2025-08-04 Thread Itamar Syn-Hershko via user
Hi folks, Wanted to invite you all to a public webinar we are holding next week - https://www.linkedin.com/events/7357004263773872128/ In this webinar, we’ll share practical insights and proven strategies for monitoring Flink clusters, identifying bottlenecks, and optimizing resource usage. We

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-07-31 Thread xiangyu feng
Congratulations! Thanks for driving this! > Best, > Lincoln Lee > Leonard Xu wrote on Thursday, July 31, 2025 at 15:36: > > Nice! Thanks Ron and all involved for the great work. > > Best, > > Leonard

Re: [ANNOUNCE] Apache Flink 2.1.0 released

2025-07-31 Thread Zakelly Lan
all involved for the great work. > Best, > Leonard > On 31 Jul 2025 at 14:30, Ron Liu wrote: > > The Apache Flink community is very happy to announce the release of Apache Flink 2.1.0, which is the first rel

[ANNOUNCE] Apache Flink 2.1.0 released

2025-07-30 Thread Ron Liu
The Apache Flink community is very happy to announce the release of Apache Flink 2.1.0, which is the first release for the Apache Flink 2.1 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

Re: Is it acceptable to create a FromCollectionSource in flink-runtime/flink-runtime-java?

2025-07-30 Thread Xeno Amess
It seems there is FLINK-33326. Is anyone handling it now? If not, I would like to have a try... Xeno Amess wrote on Wednesday, July 30, 2025 at 18:49: > Hi. > I see the `fromCollection` impl and find that it actually uses `FromIteratorFunction`, which is a `SourceFunction` that is deprecated & internal

Is it acceptable to create a FromCollectionSource in flink-runtime/flink-runtime-java?

2025-07-30 Thread Xeno Amess
Hi. I see the `fromCollection` impl and find that it actually uses `FromIteratorFunction`, which is a `SourceFunction` that is deprecated & internal. I wonder if it would be valuable to create a class like this, but using the new `Source` interface.
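A minimal usage-level sketch of the Source-based path, assuming a Flink 1.19+/2.0 setup where `StreamExecutionEnvironment#fromData` is available as the replacement for the deprecated `fromCollection`. This is not the FLINK-33326 work itself, just the user-facing API it relates to.

```java
import java.util.List;

import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FromDataExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // fromData replaces the SourceFunction-based fromCollection/fromElements
        // and is backed by the new Source interface (assumed available in 1.19+/2.0).
        DataStreamSource<String> words = env.fromData(List.of("apache", "flink", "source"));

        words.print();
        env.execute("from-data-example");
    }
}
```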

Re: Issue with Special Characters in userProgramArguments When Submitting Flink Application to YARN

2025-07-27 Thread Shengkai Fang
Hi, which version do you use? Could you share the whole exception stack? Best, Shengkai 李 琳 wrote on Tuesday, July 8, 2025 at 23:06: > > Hi all, > > I used the following code to create an applicationConfiguration for > submitting a Flink job to a YARN cluster in application mode: > > *App

Re:Re: Where is flink 2.1?

2025-07-23 Thread Xuyang
version of 2.2. https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. On 23 Jul 2025, at 12:42, Xeno Amess wrote: (though I know we don't really follow semver at backward compatibility, at least we can try the 3-parts version number...like 2.1.0...or

Re: Where is flink 2.1?

2025-07-23 Thread Xeno Amess
work on several versions at the same time. You were looking at the 2.1 notes in a snapshot version of 2.2. https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. > On 23 Jul 2025, at 12:42,

Re: Where is flink 2.1?

2025-07-23 Thread Xeno Amess
https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. > On 23 Jul 2025, at 12:42, Xeno Amess wrote: > (though I know we don't really follow semver at backward compatibility, at least we can try the 3-parts version number...like 2.1.0...or

Flink Java17-based incompatibility with Kubernetes Operator

2025-07-23 Thread Nikola Milutinovic
Hello all. Just wanted to give a heads-up that Flink 1.20 images based on Java 17 have problems with launching jobs via Flink Kubernetes Operator. Those based on Java 11 work. We are running a Flink Session cluster on Kubernetes, deploying it using Flink K8s Operator. Our session cluster was

Re: Where is flink 2.1?

2025-07-23 Thread Frens Jan Rumph
The project can of course work on several versions at the same time. You were looking at the 2.1 notes in a snapshot version of 2.2. https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details about the 2.1 release. > On 23 Jul 2025, at 12:42, Xeno Amess wr

Issue on Flink CDC with Oracle CDB + PDB

2025-07-21 Thread hlx98007
Flink 1.20.2 tgz package with oracle-cdc 3.4 and ojdbc17.jar downloaded from the official website. Here are my steps: -- Step 1: Set up XE to enable archive logs alter system set db_recovery_file_dest_size = 10G; alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area'

Flink K8S operator JobHistory NullPointerException

2025-07-21 Thread Dominik Wosiński
Hey, I'm trying to run the Flink K8S Operator, version 1.12. The Flink version I'm trying to use is 1.17.1. Generally the deployment looks & works correctly; however, if at any point the job gets restarted because of an exception, the operator can't correctly deserial

Flink Kubernetes Operator Fails on Kubernetes 1.33 - fabric8 Client Issue

2025-07-15 Thread André Midea Jasiskis
Hi Flink community, We've encountered a compatibility issue with the Flink Kubernetes operator on Kubernetes 1.33 clusters. The operator fails reconciliation with the following error: Unrecognized field "emulationMajor" (class io.fabric8.kubernetes.client.VersionInfo), not marked

Re: Flink 2.0 Migration RocksDBStateBackend

2025-07-15 Thread Han Yin
Hi Patricia, `setPredefinedOptions` can still be found in EmbeddedRocksDBStateBackend in Flink 2.0. You can set the option programmatically via config: config.set( RocksDBOptions.PREDEFINED_OPTIONS, PredefinedOptions.FLASH_SSD_OPTIMIZED.name()); Best, Han Yin > On July 15, 2025, at 13
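Expanding Han Yin's snippet into a fuller sketch for readers of this thread. The import packages below (the 1.x `org.apache.flink.contrib.streaming.state` location for the RocksDB classes) are an assumption; they may have moved in the 2.0 flink-statebackend-rocksdb module, so check your release.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.StateBackendOptions;
// Assumed package; verify against the flink-statebackend-rocksdb jar of your Flink version.
import org.apache.flink.contrib.streaming.state.PredefinedOptions;
import org.apache.flink.contrib.streaming.state.RocksDBOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbFlashSsdExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Select the embedded RocksDB state backend.
        config.set(StateBackendOptions.STATE_BACKEND, "rocksdb");
        // Config-based equivalent of setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED).
        config.set(RocksDBOptions.PREDEFINED_OPTIONS, PredefinedOptions.FLASH_SSD_OPTIMIZED.name());

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config);
        // Minimal placeholder pipeline so the job can run end-to-end.
        env.fromSequence(1, 10).print();
        env.execute("rocksdb-flash-ssd-example");
    }
}
```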

Flink 2.0 Migration RocksDBStateBackend

2025-07-14 Thread patricia lee
Hi, We are migrating our Flink 1.20 to Flink 2.0. We have set the code in our project like this: RocksDBStateBackend rocksDb = new RocksDBStateBackend(config.getPath() + "/myrockdbpath"); rocksDb.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED) We followed the new way in Fl

Re: [ANNOUNCE] Apache Flink 1.20.2 released

2025-07-11 Thread Sergey Nuyanzin
Congratulations to everyone involved! Thank you Ferenc for managing the release! On Fri, Jul 11, 2025 at 1:15 PM Ferenc Csaky wrote: > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.20.2, which is the second bugfix release for the Apache F

Re: [ANNOUNCE] Apache Flink 1.19.3 released

2025-07-11 Thread Sergey Nuyanzin
Congratulations to everyone involved! Thank you Ferenc for managing the release! On Fri, Jul 11, 2025 at 1:13 PM Ferenc Csaky wrote: > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.19.3, which is the third bugfix release for the Apache F

[ANNOUNCE] Apache Flink 1.20.2 released

2025-07-11 Thread Ferenc Csaky
The Apache Flink community is very happy to announce the release of Apache Flink 1.20.2, which is the second bugfix release for the Apache Flink 1.20 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

[ANNOUNCE] Apache Flink 1.19.3 released

2025-07-11 Thread Ferenc Csaky
The Apache Flink community is very happy to announce the release of Apache Flink 1.19.3, which is the third bugfix release for the Apache Flink 1.19 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

[ANNOUNCE] Apache Flink Kubernetes Operator 1.12.1 released

2025-07-11 Thread Gyula Fóra
The Apache Flink community is very happy to announce the release of Apache Flink Kubernetes Operator 1.12.1. The Flink Kubernetes Operator allows users to manage their Apache Flink applications and their lifecycle through native k8s tooling like kubectl. The release is available for download at

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread Zhanghao Chen
Yes, it's possible. You may refer to the example here: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#most-frequent-issues Best, Zhanghao Chen From: patricia lee Sent: Thursday, Ju

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
new config pipeline.serialization-config [1]. I've created a new JIRA issue [2] to fix the doc. > [1] https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config > [2] https://issues.apache.org/jira/browse/FLINK

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread Zhanghao Chen
Hi Patricia, You may register the type using the new config pipeline.serialization-config [1]. I've created a new JIRA issue [2] to fix the doc. [1] https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config [2] https://issues.apach
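A sketch of the same registration done programmatically, for readers who prefer code over a config file. `org.example.MyClass` is a placeholder, and the single-string form passed to `setString` is an assumption; the commented YAML form follows the pattern on the linked config page and should be treated as the primary reference.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SerializationConfigExample {
    public static void main(String[] args) {
        // Config-file form (see the pipeline-serialization-config docs linked above), roughly:
        //
        //   pipeline.serialization-config:
        //     - org.example.MyClass: {type: pojo}
        //
        // org.example.MyClass stands in for the type previously passed to
        // env.registerType(...) / registerPojoType(...).
        Configuration config = new Configuration();
        config.setString(
                "pipeline.serialization-config",
                "[org.example.MyClass: {type: pojo}]");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config);
        // ... build sources, transformations and sinks, then call env.execute(...).
    }
}
```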

Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
Hi, We are currently migrating from Flink version 1.18 to Flink version 2.0. We have this configuration: StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); In flink 2.0, if our understanding is correct, we'll use this registerPoj

Flink 2.0 registerTypes question

2025-07-10 Thread patricia lee
Hi, We are currently migrating our flink projects from version 1.18 to 2.0. We have this part of the codes that we set the model in the StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); Right now in Flink 2.0, I followed this

Issue with Special Characters in userProgramArguments When Submitting Flink Application to YARN

2025-07-08 Thread 李 琳
Hi all, I used the following code to create an applicationConfiguration for submitting a Flink job to a YARN cluster in application mode: ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(userProgramArguments, userMainClass); However, I

Re: Flink Example

2025-07-07 Thread Jeremie Doehla
"4.0.0-2.0" lazy val flink_simple_testing = project .in(file("flink_simple_testing")) .settings( name := "flink-testing", libraryDependencies ++= Seq( "org.apache.flink" % "flink-core" % flinkVersion, "org.apache.flink" % "

Flink Example

2025-07-03 Thread Jeremie Doehla
here: https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/dev/datastream/overview/ Updated slightly to this: import org.apache.flink.api.common.functions.FlatMapFunction; import org.apache.flink.api.java.tuple.Tuple2; import org.apache.flink.streaming.api.datastream.DataStre

Flink autoscaler on Kubernetes

2025-07-03 Thread Sachin Mittal
Hi I just wanted to know if we have to do some special configurations at the infrastructure provider to scale up the job. For example if I add more nodes in my node pool of my k8s cluster, the task managers are not automatically scaled up. Now I have also run Flink on AWS EMR where it uses Yarn

Query: Geo-Redundancy with Apache Flink on Kubernetes & Replicated Checkpoints !

2025-07-03 Thread Sachin
Dear Apache Flink Community, I hope you're doing well. We are currently operating a Flink deployment on *Kubernetes*, with *high availability (HA) configured using Kubernetes-based HA services*. We're exploring approaches for *geo-redundancy (GR)* to ensure disaster recovery and fault

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Salva Alcántara
Hey Gyula! I created https://issues.apache.org/jira/browse/FLINK-38033 I will add logs / provide more details when I have time... On Mon, Jun 30, 2025 at 6:18 PM Gyula Fóra wrote: > Hey! > Could you please open a JIRA ticket with the operator logs so we can investigate?

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Salva Alcántara
{ "emoji": "👍", "version": 1 }

Re: Flink Operator v1.12 breaks jobs upgrades

2025-06-30 Thread Gyula Fóra
https://lists.apache.org/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq > Regards, > Salva > On 2025/06/27 15:21:47 Salva Alcántara wrote: > > I was running Flink Kubernetes Operator v1.11 with a minor tweak, see more here: > > https://www.project-syndicate.org/comm

RE: Flink Operator v1.12 breaks jobs upgrades

2025-06-27 Thread Salva Alcántara
Sorry, I meant this link (for the operator tweak, which I indeed applied in v1.10, not v1.11): - https://lists.apache.org/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq Regards, Salva On 2025/06/27 15:21:47 Salva Alcántara wrote: > I was running Flink Kubernetes Operator v1.11 with a minor tweak,

Flink Operator v1.12 breaks jobs upgrades

2025-06-27 Thread Salva Alcántara
I was running Flink Kubernetes Operator v1.11 with a minor tweak, see more here: - https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05 and everything was fine. However, after upgrading to v1.12, the operator is

RE: Flink K8 Operator: New Snapshot CRD Not Working

2025-06-26 Thread Salva Alcántara
After upgrading the Flink Kubernetes Operator from v1.11 to v1.12 upgrades started to fail in all my jobs with the following error message: ``` Error during event processing ExecutionScope{ resource id: ResourceID{name='my-job-checkpoint-periodic-1741010907590', namespace='plat

Errors on adding job to flink run

2025-06-25 Thread George
Hi all. OK, so I'm using the following files in my parent pom file. The Flink source connector's intention is to create SNMP agents/targets; use Flink SQL to define a table that will be scraped at a specified interval. 1.20.1 17 3.11.0 3.5.2 ${java.version} ${java.version} UTF-8 1.7.36 2

Re: Flink SQL Plain avro schema deserialization

2025-06-20 Thread Yarden BenMoshe
Hi all, In case it helps someone in the future, I wanted to share that I was finally able to identify the problem and create a fix that works for me. As mentioned in earlier messages, the default Avro behavior in Flink generates a schema based on the table definition. This includes some

Interest Check: Flink-Pinot Connector with SQL & Upsert Support

2025-06-18 Thread Poorvank Bhatia
Following up on a proposal I shared in the dev thread <https://lists.apache.org/thread/sov9vg1jl0tnv4d1857sk69s96dcnwq1> — we’re working on externalizing and modernizing the Flink-Pinot connector (currently in Apache Bahir, which is now in the Attic). The plan includes: - Movi

Re: Apache Flink / Custom Connectors

2025-06-16 Thread George Leonard
Hi there, This was exactly the problem. Dian figured out the reason: I needed to rename the file org.apache.flink.table.factories.DynamicTableFactory to org.apache.flink.table.factories.Factory, and then needed to add the format tag for the Flink SQL. Blog will drop in 1 hour on Medium and
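For others hitting the same symptom: the rename refers to the Java SPI service file that Flink's table planner scans to discover factories. A sketch of the resource file, with a placeholder factory class name (your own connector's factory goes there):

```
# File: src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
# One fully-qualified factory class per line; the class below is a placeholder.
com.example.connector.MyDynamicTableFactory
```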

Re:Apache Flink / Custom Connectors

2025-06-16 Thread Xuyang
connector. -- Best! Xuyang At 2025-06-16 00:08:14, "George" wrote: Hi all Hope someone can help. 1. want to confirm personally written connectors can be placed in $FLINK_HOME/lib or a subdirectory. i got my prometheus connector in ...HOME/lib/flink/ I have other

Apache Flink / Custom Connectors

2025-06-15 Thread George
Hi all, Hope someone can help. 1. I want to confirm that personally written connectors can be placed in $FLINK_HOME/lib or a subdirectory. I've got my prometheus connector in ...HOME/lib/flink/ I have other connectors/jars in HOME/lib/fluss and HOME/lib/hive. I can see the relevant jar loaded in the

Re: Problems running Flink job via K8s operator

2025-06-11 Thread Nikola Milutinovic
Hi Gunnar. Answer 1. I am not sure which image we should be talking about – operator or Flink sessions cluster, since I do not know who is interpreting the job instructions. Anyway, we are using an image we derive from Flink official image: flink:1.20.1-java17 Answer 2. We are using 1.12

Problems running Flink job via K8s operator

2025-06-11 Thread Nikola Milutinovic
Hello. I have problems trying to run a Flink session job using Flink Kubernetes operator. Two problems, so far. This is the Spec I am trying to use: apiVersion: flink.apache.org/v1beta1 kind: FlinkSessionJob metadata: name: nix-test spec: deploymentName: flink-cluster-session-cluster job

Re: Flink SQL Plain avro schema deserialization

2025-06-10 Thread Yarden BenMoshe
On Sunday, June 8, 2025 at 11:18, Yarden BenMoshe <yarde...@gmail.com> wrote: > Hi all, > I am trying to create a Flink SQL pipeline that will consume from a kafka topic that contains plain avro objects (no schema registry used). > As I can see in the docs, for plain avro

Flink SQL Plain avro schema deserialization

2025-06-08 Thread Yarden BenMoshe
Hi all, I am trying to create a Flink SQL pipeline that will consume from a kafka topic that contains plain avro objects (no schema registry used). As I can see in the docs, for plain avro, the schema (in flink sql context) will be inferred from the table definition. My problem is that the schema

Re: Kubernetes Operator Flink version null for FlinkSessionsJob

2025-06-06 Thread Nikola Milutinovic
What does your YAML for Job submission look like? And the YAML for Session cluster, for that matter. It is hard to tell without those. Nix, From: dominik.buen...@swisscom.com Date: Thursday, June 5, 2025 at 3:40 PM To: user@flink.apache.org Subject: Kubernetes Operator Flink version null for

Re: Flink CDC for mongoDB, need to understand MongoDBStreamFetchTask better

2025-06-05 Thread Sachin Mittal
Hi, I have raised a JIRA for this: https://issues.apache.org/jira/browse/FLINK-37909 And also a PR which I feel should fix this issue: https://github.com/apache/flink-cdc/pull/4039 Can someone from the Flink community take an initiative for this and get this fixed. Thanks Sachin On Thu, Jun 5

Re: Interplay between K8s VPA/HPA & Flink Operator (Auto-Scaling/Tuning)

2025-06-05 Thread Salva Alcántara
{ "emoji": "👍", "version": 1 }

Kubernetes Operator Flink version null for FlinkSessionsJob

2025-06-05 Thread Dominik.Buenzli
Hello all, I currently have an issue when deploying FlinkSessionJob deployments with the Kubernetes operator (1.11.0) for Flink 1.20.1. After successfully starting the session cluster, I receive the following error message when submitting the FlinkSessionJob: Event[Job] | Warning

Re: Interplay between K8s VPA/HPA & Flink Operator (Auto-Scaling/Tuning)

2025-06-05 Thread Gyula Fóra
Flink Autoscaling works based on processing capacity not directly on cpu. You cannot enable VPA/HPA together with the Flink Autoscaler. Gyula On Thu, Jun 5, 2025 at 2:27 PM Salva Alcántara wrote: > From other threads like this: > > https://lists.apache.o

Re: Interplay between K8s VPA/HPA & Flink Operator (Auto-Scaling/Tuning)

2025-06-05 Thread Salva Alcántara
From other threads like this: https://lists.apache.org/thread/zhfk8p4l46v3n367wwh2o2jmgfz6y2xb It seems that one should favour Flink Auto-Scaling/Tuning built-in solutions over the more generic K8s HPA/VPA ones. It might still make sense to enable VPA for CPU Autotuning since Flink Autotun

Flink CDC for mongoDB, need to understand MongoDBStreamFetchTask better

2025-06-04 Thread Sachin Mittal
Hi, I seem to have some difficulty in understanding the code for: https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mongodb-cdc/src/main/java/org/apache/flink/cdc/connectors/mongodb/source/reader/fetch/MongoDBStreamFetchTask.java#L95

[ANNOUNCE] Apache Flink Kubernetes Operator 1.12.0 released

2025-06-04 Thread Gabor Somogyi
The Apache Flink community is very happy to announce the release of Apache Flink Kubernetes Operator 1.12.0. The Flink Kubernetes Operator allows users to manage their Apache Flink applications and their lifecycle through native k8s tooling like kubectl. Please check out the release blog post

Re: Help - Unable to read Key from Kafka messages in Flink SQL Client

2025-05-28 Thread Gunnar Morling
Hey Siva, Can you try adding the following to your table's configuration: 'value.fields-include' = 'EXCEPT_KEY' Best, --Gunnar On Wed, 28 May 2025 at 20:52, Siva Ram Gujju wrote: > Hello, > > I am new to Flink. Running into an issue to read Key from Ka
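A sketch of a complete table definition using that option, wrapped in the Table API so it works from a Java job as well as the SQL Client. The topic, bootstrap servers and JSON formats are placeholders based on Siva's sample messages, and the Kafka SQL connector jar is assumed to be on the classpath.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaKeyFieldsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'key.fields' maps the Kafka record key onto the orderNumber column;
        // 'value.fields-include' = 'EXCEPT_KEY' tells the connector that the value
        // payload does not repeat the key fields.
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  orderNumber STRING,\n"
                        + "  orderDate   STRING,\n"
                        + "  productId   STRING\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'orders',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'key.format' = 'json',\n"
                        + "  'key.fields' = 'orderNumber',\n"
                        + "  'value.format' = 'json',\n"
                        + "  'value.fields-include' = 'EXCEPT_KEY',\n"
                        + "  'scan.startup.mode' = 'earliest-offset'\n"
                        + ")");

        tEnv.executeSql("SELECT * FROM orders").print();
    }
}
```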

Help - Unable to read Key from Kafka messages in Flink SQL Client

2025-05-28 Thread Siva Ram Gujju
Hello, I am new to Flink. Running into an issue to read Key from Kafka messages in Flink SQL Client. I have an Order Kafka topic with below messages: Key: {"orderNumber":"1234"} Value: {"orderDate":"20250528","productId":"Product123"}

Re: Working with Flink

2025-05-26 Thread Bryan Cantos
Hello Pedroh! I am adding the jar under /opt/flink/plugins/s3-fs-hadoop inside a docker image. It's definitely not happening under the task manager and I don't believe it's happening under the Job manager either. The error is coming from FlinkSessionJob under the Kubernetes

Re: Working with Flink

2025-05-26 Thread Pedro Mázala
Hello there Bryan! It looks like Flink cannot find the s3 schema in your packages. How are you adding the jars? Is the error happening on TM or on JM? Att, Pedro Mázala Be awesome On Thu, 22 May 2025 at 19:45, Bryan Cantos wrote: > Hello, > > I have deployed the Flink Operator

Looking for darenwkt from Flink apache/flink-connector-prometheus

2025-05-25 Thread George
Hi there Sorry, I can't figure out any other way to do this. Looking to make contact with Daren working on the Prometheus sink connector. G -- You have the obligation to inform one honestly of the risk, and as a person you are committed to educate yourself to the total risk in any activity! O

Flink CDC 3.1 for mongodb repeatedly failing in streaming mode after running for few days despite setting heartbeat

2025-05-23 Thread Sachin Mittal
Hi, So I have a data stream application which pulls data from MongoDB using CDC, and after the process runs for a few days it fails with the following stacktrace: com.mongodb.MongoCommandException: Command failed with error 286 (ChangeStreamHistoryLost): 'PlanExecutor error during aggregation :: caused

Working with Flink

2025-05-22 Thread Bryan Cantos
Hello, I have deployed the Flink Operator via helm chart ( https://github.com/apache/flink-kubernetes-operator) in our kubernetes cluster. I have a use case where we want to run ephemeral jobs so I created a FlinkDeployment and am trying to submit a job via FlinkSessionJob. I have sent example

Flink 2.0 savepoint compatibility

2025-05-22 Thread Praveen Chandna via user
Hello, Are savepoints created using the Flink 1.20.1 release guaranteed to be compatible with Flink 2.0? The page below is from the Flink 2.0 release and it doesn't contain the compatibility matrix for Flink 2.0. https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/ops/upgr

Flink task manager pod auto scaler

2025-05-21 Thread Kamal Mittal via user
Hello, Is it possible to depend on the autoscaler module only for the scaling metrics, and do task manager pod autoscaling based on those using HPA or KEDA, rather than using the complete Flink Kubernetes operator? Asking this in case there is any way to introduce the Flink Kubernetes operator in phased

Re: Flink metrics for pending records

2025-05-21 Thread Andreas Bube via user
t is shown? > Similarly for sink like '0.Sink__Print_to_Std__Out.numRecordsOut' is always shown as '0' and only in records count is shown? > Rgds, > Kamal > *From:* Andreas Bube *Sent:* 21 May 2025 11:41 *To:* Kamal Mittal *Subject:*

RE: Flink metrics for pending records

2025-05-21 Thread Kamal Mittal via user
' and only in records count is shown? Rgds, Kamal From: Andreas Bube Sent: 21 May 2025 11:41 To: Kamal Mittal Subject: Re: Flink metrics for pending records

Flink metrics for pending records

2025-05-20 Thread Kamal Mittal via user
Hello, Can you please help me find out whether any metric for "pending records" at the source level is exposed by Flink 1.20? At the link below there is nothing like that. Metrics | Apache Flink<https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/ops/metrics/> Rgds, Kamal

Re: Flink job and task manager pods auto scaling

2025-05-20 Thread Kamal Mittal via user
Please give input for the below. From: Kamal Mittal via user Sent: Tuesday, May 20, 2025 8:31:20 AM To: User Subject: Flink job and task manager pods auto scaling Hello, Couple of questions below for Flin

Re: Flink CDC is failing for MongoDB with Resume of change stream was not possible, as the resume point may no longer be in the oplog error

2025-05-20 Thread Sachin Mittal
may have exceeded the MongoDB > server oplog's TTL. > > For the CDC client side, there’s a “heartbeat.interval.ms” [1] option to > send heartbeat requests to MongoDB server regularly and refreshes resume > token position. It is suggested to set it to a reasonable interval if > c

Flink job and task manager pods auto scaling

2025-05-19 Thread Kamal Mittal via user
Hello, A couple of questions below for Flink 1.20, please give input. 1. While using the "Adaptive scheduler" for streaming job auto scaling, will the complete job restart due to new parallelism? Any special situation? 2. While using the "Adaptive scheduler" for streaming

Re: Apache Flink Serialization Question

2025-05-19 Thread Zhanghao Chen
It would still work. From: Richard Cheung Sent: Tuesday, May 20, 2025 4:08:00 AM To: Zhanghao Chen Cc: Мосин Николай ; Schwalbe Matthias ; user@flink.apache.org Subject: Re: Apache Flink Serialization Question Hi all, Thanks again for the help! I have one more follow

Re: Apache Flink Serialization Question

2025-05-19 Thread Richard Cheung
Hi all, Thanks again for the help! I have one more follow up question regarding Flink and serialization on v1.18. I know state schema evolution is supported for POJOs in Flink. However, if my class uses the POJO serializer but has a field that falls back to Kryo (such as UUID), would it still be

Re: Flink task manager pod auto scaling

2025-05-19 Thread Zhanghao Chen
For CPU scaling, you can do it by kill-and-restart or K8s VPA (beta in recent versions), and the algorithm should be straightforward. For MEM scaling, it is a bit challenging due to the complex memory model of Flink and the complexity of JVM itself. Flink K8s Operator provides AutoTuning for

Flink job autoscaling metrics

2025-05-18 Thread Kamal Mittal via user
Hello, Do Flink operator/task metrics reset if a job is auto scaled? Rgds, Kamal

Flink task manager pod auto scaling

2025-05-18 Thread Kamal Mittal via user
Hello, Does Flink support vertical task manager pod auto scaling? Rgds, Kamal

RE: Flink task manager PODs autoscaling - K8s installation

2025-05-18 Thread Kamal Mittal via user
Please give input for the below. Why doesn't K8s HPA work well with Flink? Any limitations? Also, instead of HPA, can the Kubernetes Event Driven Autoscaler (KEDA) be used? From: Kamal Mittal Sent: 17 May 2025 14:23 To: Kamal Mittal ; Zhanghao Chen ; user@flink.apache.org Subject: RE: Flink

Re: [ANNOUNCE] Apache Flink CDC 3.4.0 released

2025-05-18 Thread Leonard Xu
Thanks @Yanquan for the release management work and all involved! Best, Leonard > On 16 May 2025 at 22:40, Yanquan Lv wrote: > The Apache Flink community is very happy to announce the release of > Apache Flink CDC 3.4.0. > Apache Flink CDC is a distributed data integration tool fo

RE: Flink task manager PODs autoscaling - K8s installation

2025-05-17 Thread Kamal Mittal via user
Please give input for the below. From: Kamal Mittal via user Sent: 16 May 2025 06:42 To: Zhanghao Chen ; user@flink.apache.org Subject: RE: Flink task manager PODs autoscaling - K8s installation Thanks for describing. Just to know why K8s HPA doesn't work well with Flink? Any limita

[ANNOUNCE] Apache Flink CDC 3.4.0 released

2025-05-16 Thread Yanquan Lv
The Apache Flink community is very happy to announce the release of Apache Flink CDC 3.4.0. Apache Flink CDC is a distributed data integration tool for real time data and batch data, bringing the simplicity and elegance of data integration via YAML to describe the data movement and transformation

Flink History Server Logs

2025-05-16 Thread SURAJ KADEL
Hi Team, How can we configure Flink History Server to retrieve the logs from jobManager and taskManagers? Currently, all of our flink logs are getting stored in ElasticSearch but we want to observe these logs from History Server as well. Any sort of suggestions would be very helpful. Thanks

Re: Apache Flink Serialization Question

2025-05-16 Thread Zhanghao Chen
Flink 2.0 will work. You may use Types.LIST for lists and Types.MAP for sets (mocked by a Map) for that. Notice that Flink's built-in LIST does not support null elements and the MAP type does not support null keys, and neither supports a null collection. In Flink 2.0, we've added special tr
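A small sketch of how those factory methods are typically used with the classic TypeInformation-based APIs; the field and state names are placeholders.

```java
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

public class ExplicitCollectionTypesExample {
    public static void main(String[] args) {
        // Explicit TypeInformation for a list of strings, avoiding the Kryo fallback.
        TypeInformation<List<String>> tagsType = Types.LIST(Types.STRING);

        // A Set can be mimicked with a Map whose values are ignored (e.g. always TRUE).
        TypeInformation<Map<String, Boolean>> labelSetType =
                Types.MAP(Types.STRING, Types.BOOLEAN);

        // The explicit types plug in wherever Flink asks for TypeInformation,
        // e.g. state descriptors or DataStream#returns(...).
        ValueStateDescriptor<List<String>> tagsState =
                new ValueStateDescriptor<>("tags", tagsType);

        System.out.println(tagsState.getName() + " -> " + tagsType + ", set-as-map -> " + labelSetType);
    }
}
```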

RE: Flink task manager PODs autoscaling - K8s installation

2025-05-15 Thread Kamal Mittal via user
Thanks for describing. Just to know why K8s HPA doesn't work well with Flink? Any limitations? Also, instead of HPA, can the Kubernetes Event Driven Autoscaler (KEDA) be used? From: Zhanghao Chen Sent: 14 May 2025 06:47 To: user@flink.apache.org; Kamal Mittal Subject: Re: Flink task ma

Re: Apache Flink Serialization Question

2025-05-15 Thread Мосин Николай
List tags = new ArrayList<>(); But for Set I haven't found a workaround and, as I understand it, it must be replaced by a List. To: Mosin Nick (mosin...@yandex.ru); Cc: Schwalbe Matthias (matthias.schwa...@viseca.ch), Zhanghao Chen (zhanghao.c...@outlook.com), use

Graceful stopping of Flink on K8s

2025-05-15 Thread Nikola Milutinovic
Hello all. We are running Flink 1.20 on a Kubernetes cluster. We deploy using the Flink K8s Operator. I was wondering: when Kubernetes decides to kill a running Flink cluster, is it using some regular graceful method or does it just kill the pod? Just for reference, Docker has a way to specify a

Re: Apache Flink Serialization Question

2025-05-15 Thread Richard Cheung
the future. Is there a workaround for this for POJO compliance in Flink v1.18, or would I have to upgrade to Flink v2, which supports common collection types for serialization, or maybe even upgrading to v2 won't work? Best regards, Richard On Thu, May 15, 2025 at 9:06 AM Mosin Nick wrote

Re: Apache Flink Serialization Question

2025-05-15 Thread Mosin Nick
@flink.apache.org (user@flink.apache.org); Subject: Apache Flink Serialization Question; 15.05.2025, 15:56, "Schwalbe Matthias": Hi Richard, Same problem, 12 Flink versions later, I created my own TypeInformation/Serializer/Snapshot for UUID (Scala in that case), along: class UUIDTypeInformati

RE: Apache Flink Serialization Question

2025-05-15 Thread Schwalbe Matthias
Hi Richard, Same problem, 12 Flink versions later, I created my own TypeInformation/Serializer/Snapshot for UUID (Scala in that case), along: class UUIDTypeInformation extends TypeInformation[UUID] … class UUIDSerializer extends TupleSerializerBase[UUID]( … class UUIDSerializerSnapshot

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-15 Thread Мосин Николай
workarounds, but all my attempts almost failed due to the lack of timers that would rely on WM and which I do not use now. To: Zhanghao Chen (zhanghao.c...@outlook.com); Cc: user@flink.apache.org; Subject: Keyed watermarks: A fine-grained watermark generation for Apache Flink; 15.05.2

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-15 Thread Zhanghao Chen
Hi, I have talked with the community about this for many years, last time at Flink Forward 2024 in Berlin. The use cases are simple. If you receive data from IoT devices over the GSM network, the clocks on all the devices aren't synchronised; the IoT devices can buffer data to reduce the cost

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-14 Thread Lasse Nedergaard
Hi, I have talked with the community about this for many years, last time at Flink Forward 2024 in Berlin. The use cases are simple. If you receive data from IoT devices over the GSM network, the clocks on all the devices aren't synchronised; the IoT devices can buffer data to reduce the cost

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-14 Thread Zhanghao Chen
Thanks for sharing! It is an interesting idea. The generalized watermark [1] introduced in DataStreamV2 might be sufficient to implement it. It'll be great if you could share more contexts on why this is useful in your pipelines. [1] https://cwiki.apache.org/confluence/display/FLINK/FLI

Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-14 Thread Мосин Николай
I found the paper https://scholar.google.com/scholar?q=10.1016/j.future.2025.107796 which describes Keyed Watermarks, which is what I need in my pipelines. Does anyone know whether it is planned to implement Keyed Watermarks in Flink, and when?

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread George
expected this package to have been installed already, as you can't really do a lot on the Flink nodes with Python without this package. > G > On Wed, May 14, 2025 at 3:33 PM Nikola Milutinovic <n.milutino...@levi9.com> wrote: >> Hmm, lemme see

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread George
Thanks. I would think this should rather be done upstream in the source image; for that matter, I would have expected this package to have been installed already, as you can't really do a lot on the Flink nodes with Python without this package. G On Wed, May 14, 2025 at 3:33 PM N

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread Nikola Milutinovic
Flink 1.19.1 Hi there, Got it built :) I installed python3-pip in addition to the java-headless version, then installed the package globally and then did the clean-up. I am, however, getting the below now. It seems to be looking for python from the Flink side and not python3 ``` flink@jobmanager

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread George
Hi there, Got it built :) I installed python3-pip in addition to the java-headless version, then installed the package globally and then did the clean-up. I am, however, getting the below now. It seems to be looking for python from the Flink side and not python3 ``` flink@jobmanager:/sql

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread George
Will have a look. Will advise when done. G On Wed, May 14, 2025 at 11:18 AM Nikola Milutinovic wrote: > Hi George. > We saw the same problem, running Apache Flink 1.19 and 1.20 images. The cause is that the Flink image provides a JRE and you need a JDK to build/install PyFlin

Re: Python based User defined function on Flink 1.19.1

2025-05-14 Thread Nikola Milutinovic
Hi George. We saw the same problem, running Apache Flink 1.19 and 1.20 images. The cause is that Flink image provides a JRE and you need JDK to build/install PyFlink. And, oddly enough, I think it was only on ARM64 images. Amd64 was OK, I think. So, Mac M1, M2, M3… Our Docker file for
