Hi folks,
Wanted to invite you all to a public webinar we are holding next week -
https://www.linkedin.com/events/7357004263773872128/
In this webinar, we’ll share practical insights and proven strategies for
monitoring Flink clusters, identifying bottlenecks, and optimizing resource
usage. We
Congratulations! Thanks for driving this!
>>
>> Best,
>> Lincoln Lee
>>
>>
>> Leonard Xu wrote on Thu, Jul 31, 2025 at 15:36:
>>
>> > Nice! Thanks Ron and all involved for the great work.
>> >
>> > Best,
>> > Leonard
>> >
>> &
> > Nice! Thanks Ron and all involved for the great work.
> >
> > Best,
> > Leonard
> >
> > > On Jul 31, 2025, at 14:30, Ron Liu wrote:
> > >
> > > The Apache Flink community is very happy to announce the release of
> > Apache
> > > Flink 2.1.0, which is the first rel
The Apache Flink community is very happy to announce the release of Apache
Flink 2.1.0, which is the first release for the Apache Flink 2.1 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
It seems there is FLINK-33326.
Is anyone handling it now?
If not, I would like to have a try...
Xeno Amess wrote on Wed, Jul 30, 2025 at 18:49:
> Hi.
> I see the `fromCollection` impl, and found out it actually uses
> `FromIteratorFunction`, which is a `SourceFunction`, which is
> deprecated & internal.
Hi.
I see the `fromCollection` impl, and found out it actually uses
`FromIteratorFunction`, which is a `SourceFunction`, which is
deprecated & internal.
I wonder if it would be valuable to create a class like this, but using
the new `Source` interface.
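For reference, a minimal sketch of what the non-deprecated path can look like from user code (a sketch only, assuming Flink 1.19+/2.x where `fromData` is the documented replacement for `fromCollection`; the class and job names here are made up):
```
import java.util.Arrays;
import java.util.List;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FromDataExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // fromData replaces the SourceFunction-backed
        // fromCollection/fromElements methods.
        List<String> elements = Arrays.asList("a", "b", "c");
        DataStream<String> stream = env.fromData(elements);

        stream.print();
        env.execute("from-data-example");
    }
}
```
A new `Source`-based implementation along the lines proposed above would still be useful for iterators that don't fit in memory.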
hi,
Which version do you use? Could you share the whole exception stack?
Best,
Shengkai
李 琳 wrote on Tue, Jul 8, 2025 at 23:06:
>
> Hi all,
>
> I used the following code to create an applicationConfiguration for
> submitting a Flink job to a YARN cluster in application mode:
>
> *App
The project can of course work on several versions at the same time. You were
looking at the 2.1 notes in a snapshot version of 2.2.
https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details
about the 2.1 release.
On 23 Jul 2025, at 12:42, Xeno Amess wrote:
(though I know we don't really follow semver for backward compatibility, at
least we can try the 3-part version number... like 2.1.0... or
Hello all.
Just wanted to give a heads-up that Flink 1.20 images based on Java 17 have
problems with launching jobs via Flink Kubernetes Operator. Those based on Java
11 work.
We are running a Flink Session cluster on Kubernetes, deploying it using Flink
K8s Operator. Our session cluster was
The project can of course work on several versions at the same time. You were
looking at the 2.1 notes in a snapshot version of 2.2.
https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details
about the 2.1 release.
> On 23 Jul 2025, at 12:42, Xeno Amess wrote:
I downloaded the Flink 1.20.2 tgz package with oracle-cdc 3.4 and ojdbc17.jar
from the official website.
Here are my steps:
-- Step 1: Setup XE to enable archive logs
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest =
'/opt/oracle/oradata/recovery_area
Hey,
I'm trying to run Flink K8S Operator with version 1.12. The Flink version
I'm trying to use is 1.17.1. Generally the deployment looks and works
correctly; however, if at any point the job gets restarted because of an
exception, the operator can't correctly deserial
Hi Flink community,
We've encountered a compatibility issue with the Flink Kubernetes operator
on Kubernetes 1.33 clusters. The operator fails reconciliation with the
following error:
Unrecognized field "emulationMajor" (class
io.fabric8.kubernetes.client.VersionInfo),
not marked as ignorable
Hi Patricia,
`setPredefinedOptions` can still be found in EmbeddedRocksDBStateBackend
in Flink 2.0.
You can set the option programmatically via config:
config.set(
    RocksDBOptions.PREDEFINED_OPTIONS,
    PredefinedOptions.FLASH_SSD_OPTIMIZED.name());
Best,
Han Yin
> On Jul 15, 2025, at 13
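For completeness, here is how that suggestion might look in context. This is a sketch only: the package names follow the Flink 1.x layout (`org.apache.flink.contrib.streaming.state`) and may differ in your 2.0 distribution, so double-check the imports.
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.PredefinedOptions;
import org.apache.flink.contrib.streaming.state.RocksDBOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbOptionsExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Apply the predefined RocksDB option profile for flash SSDs,
        // replacing the removed setPredefinedOptions(...) call.
        config.set(RocksDBOptions.PREDEFINED_OPTIONS,
                PredefinedOptions.FLASH_SSD_OPTIMIZED.name());

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);
        env.fromData(1, 2, 3).print(); // placeholder pipeline
        env.execute("rocksdb-options-example");
    }
}
```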
Hi,
We are migrating from Flink 1.20 to Flink 2.0.
We had set the code in our project like this:
RocksDBStateBackend rocksDb = new RocksDBStateBackend(config.getPath() +
    "/myrockdbpath");
rocksDb.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED);
We followed the new way in Fl
Congratulations to everyone involved!
Thank you Ferenc for managing the release!
On Fri, Jul 11, 2025 at 1:15 PM Ferenc Csaky wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.20.2, which is the second bugfix release for the Apache F
Congratulations to everyone involved!
Thank you Ferenc for managing the release!
On Fri, Jul 11, 2025 at 1:13 PM Ferenc Csaky wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.3, which is the third bugfix release for the Apache F
The Apache Flink community is very happy to announce the release of Apache
Flink 1.20.2, which is the second bugfix release for the Apache Flink 1.20
series.
Apache Flink® is an open-source stream processing framework for distributed,
high-performing, always-available, and accurate data streaming
The Apache Flink community is very happy to announce the release of Apache
Flink 1.19.3, which is the third bugfix release for the Apache Flink 1.19
series.
Apache Flink® is an open-source stream processing framework for distributed,
high-performing, always-available, and accurate data streaming
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.12.1.
The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
The release is available for download at
Yes, it's possible. You may refer to the example here:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#most-frequent-issues
Best,
Zhanghao Chen
From: patricia lee
Sent: Thursday, Ju
Hi Patricia,
You may register the type using the new config pipeline.serialization-config
[1]. I've created a new JIRA issue [2] to fix the doc.
[1]
https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config
[2] https://issues.apach
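As a rough illustration of [1], setting the option programmatically might look like this (a sketch, not authoritative: it assumes the `PipelineOptions.SERIALIZATION_CONFIG` constant that accompanied this option in 1.19+, and `org.example.MyPojo` is a made-up class name; the same entry can also go into the Flink configuration file):
```
import java.util.List;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SerializationConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Each entry maps a class to a serializer backend (pojo, kryo, ...),
        // using the same syntax as the config-file entry.
        config.set(PipelineOptions.SERIALIZATION_CONFIG,
                List.of("org.example.MyPojo: {type: pojo}"));

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);
        env.fromData(1, 2, 3).print(); // placeholder pipeline
        env.execute("serialization-config-example");
    }
}
```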
Hi,
We are currently migrating from Flink version 1.18 to Flink version 2.0.
We have this configuration:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.registerType(MyClass.class);
In flink 2.0, if our understanding is correct, we'll use this registerPoj
Hi,
We are currently migrating our Flink projects from version 1.18 to 2.0.
We have this part of the code where we set the types in the environment:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.registerType(MyClass.class);
Right now in Flink 2.0, I followed this
Hi all,
I used the following code to create an applicationConfiguration for submitting
a Flink job to a YARN cluster in application mode:
ApplicationConfiguration applicationConfiguration =
    new ApplicationConfiguration(userProgramArguments, userMainClass);
However, I
"4.0.0-2.0"
lazy val flink_simple_testing = project
  .in(file("flink_simple_testing"))
  .settings(
    name := "flink-testing",
    libraryDependencies ++= Seq(
      "org.apache.flink" % "flink-core" % flinkVersion,
      "org.apache.flink" %
here:
https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/dev/datastream/overview/
Updated slightly to this:
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStre
Hi
I just wanted to know if we have to do some special configurations at the
infrastructure provider to scale up the job. For example if I add more
nodes in my node pool of my k8s cluster, the task managers are not
automatically scaled up.
Now I have also run Flink on AWS EMR where it uses Yarn
Dear Apache Flink Community,
I hope you're doing well.
We are currently operating a Flink deployment on *Kubernetes*, with *high
availability (HA) configured using Kubernetes-based HA services*. We're
exploring approaches for *geo-redundancy (GR)* to ensure disaster recovery
and fault
Hey Gyula! I created
https://issues.apache.org/jira/browse/FLINK-38033
I will add logs / provide more details when I have time...
On Mon, Jun 30, 2025 at 6:18 PM Gyula Fóra wrote:
> Hey!
>
> Could you please open a JIRA ticket with the operator logs so we can
> investigate?
>
Sorry, I meant this link (for the operator tweak, which I indeed applied in
v1.10, not v1.11):
- https://lists.apache.org/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq
Regards,
Salva
On 2025/06/27 15:21:47 Salva Alcántara wrote:
> I was running Flink Kubernetes Operator v1.11 with a minor tweak,
I was running Flink Kubernetes Operator v1.11 with a minor tweak, see more
here:
-
https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05
and everything was fine. However, after upgrading to v1.12, the operator is
After upgrading the Flink Kubernetes Operator from v1.11 to v1.12, upgrades
started to fail for all my jobs with the following error message:
```
Error during event processing ExecutionScope{ resource id:
ResourceID{name='my-job-checkpoint-periodic-1741010907590',
namespace='plat
Hi all
OK, so I'm using the following in my parent pom file.
The Flink source connector's intention is to create SNMP agents/targets:
use Flink SQL to define a table that will be scraped at a specified
interval.
<properties>
    <flink.version>1.20.1</flink.version>
    <java.version>17</java.version>
    <maven.compiler.plugin.version>3.11.0</maven.compiler.plugin.version>
    <maven.shade.plugin.version>3.5.2</maven.shade.plugin.version>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <slf4j.version>1.7.36</slf4j.version>
    2
Hi all,
In case it helps someone in the future, I wanted to share that I was
finally able to identify the problem and create a fix that works for me.
As mentioned in earlier messages, the default Avro behavior in Flink
generates a schema based on the table definition. This includes some
Following up on a proposal I shared in the dev thread
<https://lists.apache.org/thread/sov9vg1jl0tnv4d1857sk69s96dcnwq1> — we’re
working on externalizing and modernizing the Flink-Pinot connector
(currently in Apache Bahir, which is now in the Attic). The plan includes:
-
Movi
Hi there
This was exactly the problem.
Dian figured out the reason: I needed to rename the file
org.apache.flink.table.factories.DynamicTableFactory to
org.apache.flink.table.factories.Factory.
And then I needed to add the format tag for the Flink SQL.
Blog will drop in 1 hour on medium and
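For anyone hitting the same problem: the two names above are Java service-loader files inside the connector jar. A sketch of the layout, with `com.example.MyTableFactory` as a placeholder for the actual factory class:
```
# Old file name, no longer discovered by the table factory SPI:
src/main/resources/META-INF/services/org.apache.flink.table.factories.DynamicTableFactory
# New file name expected by Flink's Factory SPI:
src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
# The file content is the fully qualified factory implementation, e.g.:
com.example.MyTableFactory
```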
connector.
--
Best!
Xuyang
At 2025-06-16 00:08:14, "George" wrote:
Hi all
Hope someone can help.
1. want to confirm personally written connectors can be placed in
$FLINK_HOME/lib or a subdirectory. i got my prometheus connector in
...HOME/lib/flink/
I have other
Hi all
Hope someone can help.
1. I want to confirm that personally written connectors can be placed in
$FLINK_HOME/lib or a subdirectory. I've got my prometheus connector in
...HOME/lib/flink/
I have other connectors/jars in HOME/lib/fluss and HOME/lib/hive.
I can see the relevant jar loaded in the
Hi Gunnar.
Answer 1.
I am not sure which image we should be talking about – operator or Flink
session cluster, since I do not know who is interpreting the job instructions.
Anyway, we are using an image we derive from Flink official image:
flink:1.20.1-java17
Answer 2.
We are using 1.12
Hello.
I have problems trying to run a Flink session job using Flink Kubernetes
operator. Two problems, so far. This is the Spec I am trying to use:
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
name: nix-test
spec:
deploymentName: flink-cluster-session-cluster
job
Hi all,
I am trying to create a Flink SQL pipeline that will consume from a kafka
topic that contains plain avro objects (no schema registry used).
As I can see in the docs, for plain avro, the schema (in flink sql context)
will be inferred from the table definition.
My problem is that the schema
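For context, this is the kind of definition the docs imply, where the Avro reader schema is derived from the column types rather than a registry (a sketch with made-up topic, field, and broker names):
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PlainAvroTableExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // With 'format' = 'avro' (no schema registry), the Avro schema is
        // inferred from the column definitions below.
        tEnv.executeSql(
                "CREATE TABLE readings (" +
                "  device_id STRING," +
                "  reading DOUBLE" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'readings'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'properties.group.id' = 'avro-demo'," +
                "  'scan.startup.mode' = 'earliest-offset'," +
                "  'format' = 'avro'" +
                ")");
    }
}
```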
What does your YAML for Job submission look like? And the YAML for Session
cluster, for that matter.
It is hard to tell without those.
Nix,
From: dominik.buen...@swisscom.com
Date: Thursday, June 5, 2025 at 3:40 PM
To: user@flink.apache.org
Subject: Kubernetes Operator Flink version null for
Hi,
I have raised a JIRA for this:
https://issues.apache.org/jira/browse/FLINK-37909
And also a PR which I feel should fix this issue:
https://github.com/apache/flink-cdc/pull/4039
Could someone from the Flink community take the initiative on this and get
it fixed?
Thanks
Sachin
On Thu, Jun 5
Hello all,
I currently have an issue when deploying FlinkSessionJob deployments with the
Kubernetes operator (1.11.0) for Flink 1.20.1. After successfully starting the
session cluster, I receive the following error message when submitting the
FlinkSessionJob:
Event[Job] | Warning
Flink Autoscaling works based on processing capacity, not directly on CPU.
You cannot enable VPA/HPA together with the Flink Autoscaler.
Gyula
On Thu, Jun 5, 2025 at 2:27 PM Salva Alcántara
wrote:
> From other threads like this:
>
> https://lists.apache.o
From other threads like this:
https://lists.apache.org/thread/zhfk8p4l46v3n367wwh2o2jmgfz6y2xb
It seems that one should favour Flink Auto-Scaling/Tuning built-in
solutions over the more generic K8s HPA/VPA ones.
It might still make sense to enable VPA for CPU Autotuning since Flink
Autotun
Hi,
I seem to have some difficulty in understanding the code for:
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mongodb-cdc/src/main/java/org/apache/flink/cdc/connectors/mongodb/source/reader/fetch/MongoDBStreamFetchTask.java#L95
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.12.0.
The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
Please check out the release blog post
Hey Siva,
Can you try adding the following to your table's configuration:
'value.fields-include' = 'EXCEPT_KEY'
Best,
--Gunnar
On Wed, 28 May 2025 at 20:52, Siva Ram Gujju wrote:
> Hello,
>
> I am new to Flink. Running into an issue to read Key from Ka
Hello,
I am new to Flink. I'm running into an issue reading the key from Kafka
messages in the Flink SQL Client.
I have an Order Kafka topic with the below messages:
Key: {"orderNumber":"1234"}
Value: {"orderDate":"20250528","productId":"Product123"}
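Putting Gunnar's hint together with the sample messages, a table definition along these lines might work (a sketch; the broker address and startup mode are assumptions):
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaKeyFieldsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // 'key.fields' maps orderNumber to the message key, while
        // 'value.fields-include' = 'EXCEPT_KEY' tells the connector that the
        // remaining columns come from the message value only.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  orderNumber STRING," +
                "  orderDate STRING," +
                "  productId STRING" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'scan.startup.mode' = 'earliest-offset'," +
                "  'key.format' = 'json'," +
                "  'key.fields' = 'orderNumber'," +
                "  'value.format' = 'json'," +
                "  'value.fields-include' = 'EXCEPT_KEY'" +
                ")");
    }
}
```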
Hello Pedroh!
I am adding the jar under /opt/flink/plugins/s3-fs-hadoop inside a docker
image. It's definitely not happening under the task manager and I don't
believe it's happening under Job manager either. The error is coming from
FlinkSessionJob under the Kubernetes
Hello there Bryan!
It looks like Flink cannot find the s3 scheme in your packages. How are you
adding the jars? Is the error happening on the TM or on the JM?
Att,
Pedro Mázala
Be awesome
On Thu, 22 May 2025 at 19:45, Bryan Cantos wrote:
> Hello,
>
> I have deployed the Flink Operator
Hi there
Sorry, I can't figure out any other way to do this.
Looking to make contact with Daren working on the Prometheus sink connector.
G
--
You have the obligation to inform one honestly of the risk, and as a person
you are committed to educate yourself to the total risk in any activity!
Hi,
So I have a DataStream application which pulls data from MongoDB using
CDC, and after the process runs for a few days it fails with the following
stacktrace:
com.mongodb.MongoCommandException: Command failed with error 286
(ChangeStreamHistoryLost): 'PlanExecutor error during aggregation :: caused
Hello,
I have deployed the Flink Operator via helm chart (
https://github.com/apache/flink-kubernetes-operator) in our kubernetes
cluster. I have a use case where we want to run ephemeral jobs so I created
a FlinkDeployment and am trying to submit a job via FlinkSessionJob. I have
sent example
Hello
Are savepoints created using the Flink 1.20.1 release guaranteed to be
compatible with Flink 2.0?
The page below is from the Flink 2.0 release and it doesn't contain the
compatibility matrix for Flink 2.0.
https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/ops/upgr
Hello,
Is it possible to depend on the autoscaler module only, to get the scaling
metrics and do task manager pod autoscaling based on those using HPA or
KEDA, rather than using the complete Flink Kubernetes operator?
Asking this in case there is any way to introduce the Flink Kubernetes
operator in a phased
t is shown?
>
>
>
> Similarly for sink like ‘0.Sink__Print_to_Std__Out.numRecordsOut’ is
> always shown as ‘0’ and only in records count is shown?
>
>
>
> Rgds,
>
> Kamal
>
>
>
> *From:* Andreas Bube
> *Sent:* 21 May 2025 11:41
> *To:* Kamal Mittal
> *Subject:*
' and only in records count is shown?
Rgds,
Kamal
From: Andreas Bube
Sent: 21 May 2025 11:41
To: Kamal Mittal
Subject: Re: Flink metrices for pending records
Hello,
Can you please help me to know whether any metric for "pending records" at
the source level is exposed by Flink 1.20? At the link below there is nothing
like that.
Metrics | Apache
Flink<https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/ops/metrics/>
Rgds,
Kamal
Please give input for the below.
From: Kamal Mittal via user
Sent: Tuesday, May 20, 2025 8:31:20 AM
To: User
Subject: Flink job and task manager pods auto scaling
Hello,
Couple of questions below for Flin
may have exceeded the MongoDB
> server oplog's TTL.
>
> For the CDC client side, there’s a “heartbeat.interval.ms” [1] option to
> send heartbeat requests to MongoDB server regularly and refreshes resume
> token position. It is suggested to set it to a reasonable interval if
> c
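For illustration, one way to pass the option mentioned above is through a SQL table definition; the `heartbeat.interval.ms` key is the one referenced in the message, while the connection details here are placeholders:
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MongoCdcHeartbeatExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // A regular heartbeat keeps the resume token fresh, so a slow-moving
        // collection does not fall off the end of the oplog.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  _id STRING," +
                "  PRIMARY KEY (_id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mongodb-cdc'," +
                "  'hosts' = 'mongodb:27017'," +
                "  'database' = 'mydb'," +
                "  'collection' = 'orders'," +
                "  'heartbeat.interval.ms' = '30000'" +
                ")");
    }
}
```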
Hello,
Couple of questions below for Flink 1.20, please give input.
1. While using the "Adaptive scheduler" for streaming job auto scaling, will
the complete job restart due to the new parallelism? Any special situation?
2. While using the "Adaptive scheduler" for streaming
It would still work
From: Richard Cheung
Sent: Tuesday, May 20, 2025, 4:08:00 AM
To: Zhanghao Chen
Cc: Мосин Николай; Schwalbe Matthias;
user@flink.apache.org
Subject: Re: Apache Flink Serialization Question
Hi all,
Thanks again for the help! I have one more follow
Hi all,
Thanks again for the help! I have one more follow up question regarding
Flink and serialization on v1.18. I know state schema evolution is
supported for POJOs in Flink. However, if my class uses the POJO serializer
but has a field that falls back to Kryo (such as UUID), would it still be
For CPU scaling, you can do it by kill-and-restart or K8s VPA (beta in recent
versions), and the algorithm should be straightforward. For MEM scaling, it is
a bit challenging due to the complex memory model of Flink and the complexity
of JVM itself. Flink K8s Operator provides AutoTuning for
Hello,
Do Flink operator/task metrics reset if the job is auto scaled?
Rgds,
Kamal
Hello,
Does Flink support vertical task manager pod auto scaling?
Rgds,
Kamal
Please give input for below.
Why doesn't K8s HPA work well with Flink? Any limitations?
Also, instead of HPA, can the Kubernetes Event Driven Autoscaler (KEDA) be
used?
From: Kamal Mittal
Sent: 17 May 2025 14:23
To: Kamal Mittal ; Zhanghao Chen
; user@flink.apache.org
Subject: RE: Flink
Thanks @Yanquan for the release management work and all involved!
Best,
Leonard
> On May 16, 2025, at 22:40, Yanquan Lv wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.4.0.
>
> Apache Flink CDC is a distributed data integration tool fo
Please give input for below.
From: Kamal Mittal via user
Sent: 16 May 2025 06:42
To: Zhanghao Chen ; user@flink.apache.org
Subject: RE: Flink task manager PODs autoscaling - K8s installation
Thanks for describing.
Just wanted to know why K8s HPA doesn't work well with Flink. Any limita
The Apache Flink community is very happy to announce the release of
Apache Flink CDC 3.4.0.
Apache Flink CDC is a distributed data integration tool for real time
data and batch data, bringing the simplicity and elegance of data
integration via YAML to describe the data movement and transformation
Hi Team,
How can we configure Flink History Server to retrieve the logs from
jobManager and taskManagers?
Currently, all of our flink logs are getting stored in ElasticSearch but we
want to observe these logs from History Server as well.
Any sort of suggestions would be very helpful.
Thanks
Flink 2.0 will work. You may use Types.LIST for lists and Types.MAP for sets
(mocked by a Map) for that. Notice that Flink's built-in LIST type does not
support null elements and the MAP type does not support null keys, and neither
supports a null collection. In Flink 2.0, we've added special tr
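A small sketch of that suggestion (assuming Flink's `org.apache.flink.api.common.typeinfo.Types` factory; the set-as-map encoding is the workaround described above, with set elements as keys and a dummy value):
```
import java.util.List;
import java.util.Map;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

public class CollectionTypeInfoExample {
    public static void main(String[] args) {
        // Explicit TypeInformation for a list of strings.
        TypeInformation<List<String>> listInfo = Types.LIST(Types.STRING);

        // A Set<String> mocked as Map<String, Boolean>: the keys are the
        // elements and the value is a placeholder. Note: no null keys.
        TypeInformation<Map<String, Boolean>> setAsMapInfo =
                Types.MAP(Types.STRING, Types.BOOLEAN);

        System.out.println(listInfo);
        System.out.println(setAsMapInfo);
    }
}
```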
Thanks for describing.
Just wanted to know why K8s HPA doesn't work well with Flink. Any limitations?
Also, instead of HPA, can the Kubernetes Event Driven Autoscaler (KEDA) be
used?
From: Zhanghao Chen
Sent: 14 May 2025 06:47
To: user@flink.apache.org; Kamal Mittal
Subject: Re: Flink task ma
List tags = new ArrayList<>();
But for Set I haven't found a workaround, and as I understand it, it must be
replaced by List.
To: Mosin Nick (mosin...@yandex.ru)
Cc: Schwalbe Matthias (matthias.schwa...@viseca.ch), Zhanghao Chen
(zhanghao.c...@outlook.com), use
Hello all.
We are running Flink 1.20 on a Kubernetes cluster. We deploy using the Flink
K8s Operator.
I was wondering: when Kubernetes decides to kill a running Flink cluster, does
it use some regular graceful method or does it just kill the pod?
Just for reference, Docker has a way to specify a
the future. Is there a workaround for this for POJO compliance in Flink v1.18,
or would I have to upgrade to Flink v2, which supports common collection types
for serialization, or maybe even upgrading to v2 won't work?
Best regards,
Richard
On Thu, May 15, 2025 at 9:06 AM Mosin Nick wrote
@flink.apache.org (user@flink.apache.org); Subject: Apache Flink Serialization
Question; 15.05.2025, 15:56, "Schwalbe Matthias": Hi Richard, Same problem, 12
Flink versions later, I created my own TypeInformation/Serializer/Snapshot for
UUID (Scala in that case), along: class UUIDTypeInformati
Hi Richard,
Same problem, 12 Flink versions later,
I created my own TypeInformation/Serializer/Snapshot for UUID (Scala in that
case), along:
class UUIDTypeInformation extends TypeInformation[UUID]
…
class UUIDSerializer extends TupleSerializerBase[UUID](
…
class UUIDSerializerSnapshot
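A lighter-weight variant of the same idea, sketched in Java (this assumes the Flink 1.x `registerTypeWithKryoSerializer` API; it makes the Kryo fallback for UUID compact and deterministic, but unlike a full TypeInformation/Serializer/Snapshot it adds no schema-evolution support):
```
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.Serializable;
import java.util.UUID;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UuidKryoSerializer extends Serializer<UUID> implements Serializable {
    @Override
    public void write(Kryo kryo, Output output, UUID uuid) {
        // A UUID is just two longs; write them directly.
        output.writeLong(uuid.getMostSignificantBits());
        output.writeLong(uuid.getLeastSignificantBits());
    }

    @Override
    public UUID read(Kryo kryo, Input input, Class<UUID> type) {
        return new UUID(input.readLong(), input.readLong());
    }

    public static void register(StreamExecutionEnvironment env) {
        env.getConfig().registerTypeWithKryoSerializer(
                UUID.class, UuidKryoSerializer.class);
    }
}
```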
workarounds, but all my attempts almost failed due to the lack of timers that
would rely on WM and which I do not use now. To: Zhanghao Chen
(zhanghao.c...@outlook.com); Cc: user@flink.apache.org; Subject: Keyed
watermarks: A fine-grained watermark generation for Apache Flink; 15.05.2
Hi
I have talked with the community about this for many years, last time at Flink
Forward 2024 in Berlin.
The use case is simple: if you receive data from IoT devices over the GSM
network, the clocks on the devices aren't synchronised, and the IoT devices can
buffer data to reduce the cost
Thanks for sharing! It is an interesting idea. The generalized watermark [1]
introduced in DataStreamV2 might be sufficient to implement it. It'll be great
if you could share more context on why this is useful in your pipelines.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLI
I found the paper
https://scholar.google.com/scholar?q=10.1016/j.future.2025.107796, which
describes Keyed Watermarks, and that is what I need in my pipelines. Does
anyone know whether it is planned to implement Keyed Watermarks in Flink, and
when?
Thanks.
I would think this should rather be done upstream in the source image; for
that matter, I would have expected this package to have been installed
already, as you can't really do a lot on the Flink nodes with Python without
this package.
G
On Wed, May 14, 2025 at 3:33 PM N
Flink 1.19.1
Hi there
Got it built :)
I installed python3-pip in addition to the java headless version, then
installed the package globally and then did the cleanup.
I am, however, getting the below now.
It seems to be looking for python from the Flink side and not python3:
```
flink@jobmanager:/sql
I'll have a look. Will advise when done.
G
On Wed, May 14, 2025 at 11:18 AM Nikola Milutinovic
wrote:
> Hi George.
>
>
>
> We saw the same problem, running Apache Flink 1.19 and 1.20 images. The
> cause is that Flink image provides a JRE and you need JDK to build/install
> PyFlin
Hi George.
We saw the same problem, running Apache Flink 1.19 and 1.20 images. The cause
is that Flink image provides a JRE and you need JDK to build/install PyFlink.
And, oddly enough, I think it was only on ARM64 images. Amd64 was OK, I think.
So, Mac M1, M2, M3…
Our Docker file for