reflection.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 23 September 2025 19:03
To: Schwalbe Matthias ; user@flink.apache.org
Subject: RE: Flink kafka source with Avro Generic Record
Thanks for such a nice explanation.
1. Kryo is not used now as per attached classes (custom type information
cc @Hongshun
tomi nader wrote on Thursday, September 18, 2025 at 22:13:
> Hello Flink community,
>
> I am encountering an issue when trying to run a SQL job on *Flink 2.0.0
> (standalone, macOS)* with a *Kafka sink (Docker).*
>
> *Setup:*
>
>-
>
>Flink 2.0.0 install
Hi:
After enabling Flink’s HistoryServer, we observed that different ways of
stopping a running job lead to different results in the HistoryServer:
If we use cancel or stop with savepoint, in most cases the HistoryServer
can display the basic information of the job (for example checkpoint
status,
: Tuesday, September 23, 2025 5:05 AM
To: user@flink.apache.org
Subject: [External] RE: Flink kafka source with Avro Generic Record
Can someone please give input for below?
From: Kamal Mittal <kamal.mit...@ericsson.com>
Sent: 22 Sep
The Apache Flink community is very happy to announce the release of Apache
Flink CDC 3.5.0.
Apache Flink CDC is a distributed data integration tool for real time data
and batch data, bringing the simplicity and elegance of data integration
via YAML to describe the data movement and transformation
Hello everyone,
I’m currently testing the 1.13 version of the Flink operator by building it
from source and deploying it to my cluster. I need this version of the operator
since it supports Flink 2.1. From what I can tell everything is working great
in this snapshot version.
I wanted to
,
>
> I’m currently testing the 1.13 version of the Flink operator by building
> it from source and deploying it to my cluster. I need this version of the
> operator since it supports Flink 2.1. From what I can tell everything is
> working great in this snapshot version.
>
> I wanted
y, Peter
>
> Very detailed issue description; there are two possible reasons which may
> lead to the current case, from my understanding.
>
> (1) The network issue, could you check network from Flink to MySQL via
> telnet 192.168.10.32 3306 in TaskManager?
> (2) The Higher MySQL ve
Hi Francis,
You could inherit from the original `KeyedProcessFunction` to process async
state. Please do remember to use the ForSt state backend, which is the only
one that supports asynchronous state access. If your state size is huge,
you will get the benefit.
The `DeclaringAsyncKeyedProcessFun
Hello everyone,
I have a question about using the Async state APIs in a KeyedProcessFunction.
I’m defining a pipeline like this,
keyBy(KeySelector()).enableAsyncState().process(MyKeyedProcessFunction()).
Inside MyKeyedProcessFunction I’m using the
org.apache.flink.api.common.state.v2.ValueStat
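A minimal sketch of the pattern these two messages describe, assuming the Flink 2.x state v2 API (FLIP-424/425) and the ForSt state backend; the accessor used in open() to obtain a v2 state handle is an assumption to verify against the state v2 docs for your release:
```
import org.apache.flink.api.common.functions.OpenContext;
import org.apache.flink.api.common.state.v2.ValueState;
import org.apache.flink.api.common.state.v2.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class MyKeyedProcessFunction extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Long> count;

    @Override
    public void open(OpenContext openContext) {
        // Assumed accessor for v2 state handles; the exact method may differ by release.
        count = getRuntimeContext().getState(new ValueStateDescriptor<>("count", Types.LONG));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // asyncValue() returns a StateFuture; the callback fires when the read
        // completes, so records of other keys are not blocked in the meantime.
        count.asyncValue().thenAccept(current -> {
            long next = (current == null) ? 1L : current + 1;
            count.asyncUpdate(next);
            out.collect(ctx.getCurrentKey() + " seen " + next + " times");
        });
    }
}
```
Wired exactly as in the question, keyBy(KeySelector()).enableAsyncState().process(new MyKeyedProcessFunction()), with the ForSt backend selected in the cluster configuration.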
emulate a Kafka source, i.e. byte[] is ok here,
however, you don't need to use byte[] in any other place throughout, and use the Flink
API to your favor …
* E.g. your DelayFunction can work directly with GenericRecord instead of
byte[]:
* Line 206, Line 216 … in that you don't need to serialize
Hi everybody,
I am trying to replicate a MySQL database into StarRocks using a Flink CDC
pipeline. The Flink job loads correctly, starts the incremental snapshot
and transfers multiple GB (sometimes even multiple hundred GBs) into the
sink DB. Then, after an arbitrary duration of time (it ranges
Hello,
I tried this and Flink fails later, when it tries to serialize/deserialize the
GenericRecord object for communication between operators (e.g. from map() to
another operator, or writing checkpoints, or shuffling).
it's a serialization issue during operator chaining or data exchan
Hello,
I need to support Flink application accepting avro binary events with different
schemas over flink kafka source. Need to use custom schema registry to fetch
schema at runtime and decode the incoming event.
Will use Avro Generic Record to decode incoming event with different avro
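A sketch tying the last two messages together: decode the payload with a writer schema fetched from the custom registry, and declare GenericRecordAvroTypeInfo as the produced type so GenericRecord travels between operators (and into checkpoints and shuffles) via Avro instead of falling back to Kryo, which is the failure described above. fetchWriterSchema is a hypothetical placeholder for the registry lookup:
```
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class RegistryAvroDeserializer implements KafkaRecordDeserializationSchema<GenericRecord> {

    // Kept as a String because Avro's Schema is only Serializable in newer Avro versions.
    private final String readerSchemaJson;
    private transient Schema readerSchema;

    public RegistryAvroDeserializer(String readerSchemaJson) {
        this.readerSchemaJson = readerSchemaJson;
    }

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<GenericRecord> out) throws IOException {
        if (readerSchema == null) {
            readerSchema = new Schema.Parser().parse(readerSchemaJson);
        }
        // Hypothetical: resolve the writer schema from your custom registry, e.g. via
        // a schema id embedded in the payload or carried in a Kafka header.
        Schema writerSchema = fetchWriterSchema(record);
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(writerSchema, readerSchema);
        out.collect(reader.read(null, DecoderFactory.get().binaryDecoder(record.value(), null)));
    }

    @Override
    public TypeInformation<GenericRecord> getProducedType() {
        // Makes Flink use Avro (not Kryo/reflection) whenever the record crosses
        // operator boundaries, checkpoints, or shuffles.
        return new GenericRecordAvroTypeInfo(new Schema.Parser().parse(readerSchemaJson));
    }

    private Schema fetchWriterSchema(ConsumerRecord<byte[], byte[]> record) {
        throw new UnsupportedOperationException("registry lookup is deployment-specific");
    }
}
```
Note that GenericRecordAvroTypeInfo needs one fixed (reader) schema, which is why normalizing all incoming events to a common reader schema in the deserializer is the usual pattern.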
Will check
Pretty sure I did a blog that does exactly this.
Sent from my iPhone
George Leonard
__
george...@gmail.com
+27 82 655 2466
> On 12 Sep 2025, at 11:29, Javad Saljooghi wrote:
>
>
> Hi,
>
> I deployed Flink on Kubernetes (running on AWS EC
Hi Nic,
It would be good to get some clarity on those classes. Several methods from
DataTypeUtils were used in Flink Connector Kafka and when I migrated the
connector to Flink 2.1 [1] I had to deal with the fact that some of those
methods had been removed [2] without first being deprecated
Hi all / Hong.
Slight contradiction picked up: currently Flink CDC 3.4 (Postgres) is not
compatible with Flink 2.x
https://nightlies.apache.org/flink/flink-cdc-docs-release-3.4/docs/connectors/flink-sources/overview/#supported-flink-versions
So the setting above won't make a difference.
G
On
'adults'
,'slot.name' = 'adults0'
,'scan.incremental.snapshot.enabled' = 'true' -- experimental feature:
incremental snapshot (default off)
,'scan.startup.mode' = 'initial' --
https://nightlies.apache.
Hello Flink community,
I am encountering an issue when trying to run a SQL job on *Flink 2.0.0
(standalone, macOS)* with a *Kafka sink (Docker).*
*Setup:*
-
Flink 2.0.0 installed locally (~/apps/flink-2.0.0)
-
Kafka/Zookeeper running via Docker Compose
-
Job defined in
Flink 1.20.x does not allow me to unpack/repack the JSON as a string into
the array of rows as a complex structure...
my inbound JSON payload is packed into a column called "data" as type
jsonb...
with the 1.20.x functionality I can't unpack/jsonify it... and the
functionality ne
Hi,
I am trying to figure out why AutoScaler is not working. I have deployed my
Flink job on Yarn using AWS EMR.
My configurations are:
job.autoscaler.enabled: 'true'
job.autoscaler.decision.interval: 10m
job.autoscaler.stabilization.interval: 50m
job.autoscaler.vertex.min-paral
'slot.name' = 'adults0'
,'scan.incremental.snapshot.enabled' = 'true' -- experimental feature:
incremental snapshot (default off)
,'scan.startup.mode' = 'initial' --
https://nightlies.apache.org/flink/flink-cdc-docs-release-3.1/docs/connectors/fli
Hi George,
Please use 'scan.incremental.snapshot.enabled' = 'true'. The old
SourceFunction has been removed in Flink 2.0. Otherwise, you can use Flink
1.20.
Best,
Hongshun
On Sun, Sep 14, 2025 at 11:48 PM George wrote:
> Hi all...
>
> Below is the jars includ
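For reference, a sketch of the fix Hongshun describes, as a full DDL; the columns and connection details are placeholders, and the exact option set should be checked against the Flink CDC docs for your connector version:
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AdultsCdcTable {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // 'scan.incremental.snapshot.enabled' = 'true' selects the incremental-snapshot
        // source; the old SourceFunction-based source was removed in Flink 2.0.
        tEnv.executeSql(
            "CREATE TABLE adults (" +
            "   id INT," +                              // placeholder columns
            "   name STRING," +
            "   PRIMARY KEY (id) NOT ENFORCED" +
            ") WITH (" +
            "   'connector' = 'postgres-cdc'," +
            "   'hostname' = 'localhost'," +             // placeholder connection details
            "   'port' = '5432'," +
            "   'username' = 'flink'," +
            "   'password' = 'secret'," +
            "   'database-name' = 'mydb'," +
            "   'schema-name' = 'public'," +
            "   'table-name' = 'adults'," +
            "   'slot.name' = 'adults0'," +
            "   'scan.incremental.snapshot.enabled' = 'true'," +
            "   'scan.startup.mode' = 'initial'" +
            ")");
    }
}
```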
Hi,
We are in the process of upgrading our Flink applications from Flink 1.20.2
to Flink 2.0.
Our application is running in production environments on various sites.
We have not found a working/reliable procedure for migrating the savepoints
from Flink 1.20.2 to Flink 2.0, even though we
Dear users,
is there a plan for an official release (1.12.3) for the Kubernetes
operator any time soon? We have hit a roadblock for migrating the Flink
workload from Kubernetes 1.32 to 1.33, see
https://lists.apache.org/thread/1xk9s1bzxwt3nqk1vs817t4crw9lo3nw.
We will run our own custom version
Hi all...
Below are the jars included in my Flink 2.0 build, and then the catalog
create and the query... If I drop flink to 1.20.2 and associated jars then
all works, but for 2.0 I'm a bit stuck...
*Dockerfile*
RUN echo "--> Install JARs: Flink's S3 plugin" &&
>
> Sent from my iPhone
>
> George Leonard
> __
> george...@gmail.com
> +27 82 655 2466
>
>>> On 12 Sep 2025, at 11:29, Javad Saljooghi wrote:
>>>
>>
>> Hi,
>>
>> I deployed Flink on Kubernetes (running on
Hi all.
It seems that we have problems when we try to create a group of Flink Session
Jobs: the Operator first runs into some timeouts, and most jobs go into
RECONCILING state. Then they do manage to get to the running state, but when we
inspect Flink UI, we can see that some of the jobs are
Hi,
I deployed Flink on Kubernetes (running on AWS EC2) using the Flink K8s
operator v1.2.0 and a custom Dockerfile based on the image flink:2.0.0-java21.
My messages are in Kafka (Confluent Cloud), serialized with Protobuf and
registered in the Confluent Schema Registry.
I was able to connect
Hi, there are two useful utility classes:
*
https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java//org/apache/flink/table/types/utils/DataTypeUtils.html
*
https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java//org/apache/flink/table/types/logical/utils
Hi Flink Community,
I'm trying to use the Apache Flink OpenSearch connector to sink data into
an AWS OpenSearch Serverless collection.
I've made some attempts, but my Flink job consistently fails with a 403
Forbidden error when trying to connect to the OpenSearch endpoint. This
makes
the operator logic to try to figure out
what's going on. Not sure how to mitigate it unless we know what causes it.
Cheers
Gyula
On Wed, Sep 10, 2025 at 5:25 PM Nikola Milutinovic
wrote:
> Hi all.
>
> It seems that we have problems when we try to create a group of Flink
>
Hello everyone.
I've been using Flink much to my pleasure. However, when trying to add metrics
to my application I have encountered hardships. From what I've seen the
Prometheus exporter is reporting Flink's Histogram metric as a
Summary. Unfortunately, this is a deal breaker for me when
Hi Flink Community,
I'd like to share *flink-mcp*, an MCP (Model Context Protocol) server that
connects Apache Flink with AI assistants and other MCP-compatible tools.
*What it does:*
flink-mcp enables AI assistants to interact directly with your Flink
clusters through the SQL Gateway RES
Hi all
I am using a Postgres store atm via a JDBC connector to create my type =
'paimon' catalog, as per below; this is for my outbound data flow.
CREATE CATALOG c_paimon WITH (
'type' = 'paimon'
,'metastore' = 'jdbc'
,'catalog-key' = 'jdbc'
-- PostgreSQL JDBC connection
,'uri' =
'jdbc:postgresql:/
ast-once*.
>>>
>>> From my understanding, exactly-once might not really fit this model.
>>> IMO, to make exactly-once work, the sink has to stop at each
>>> checkpoint, wait for all outstanding requests to finish, and force writes
>>> to happen in order
rk, the sink has to stop at each checkpoint, wait
>> for all outstanding requests to finish, and force writes to happen in order
>> so you can clearly separate "records before this checkpoint" from "records
>> after it." At that point you’re basically running sync
m "records
> after it." At that point you’re basically running synchronously, so the
> whole async throughput benefit is gone.
>
> For systems that actually support transactions (Kafka EOS, JDBC XA, File
> sinks, etc.), Flink has separate *committing sink* designs that are
point you’re basically running synchronously, so the
whole async throughput benefit is gone.
For systems that actually support transactions (Kafka EOS, JDBC XA, File
sinks, etc.), Flink has separate *committing sink* designs that are a
better fit.
That’s my read of it — but I might be missing some
Hi
Reading the AsyncSinkBase source code, I encountered a section of
limitations.
One of the points is "We are not considering support for exactly-once
semantics at this point."
Can someone share the reason for this? Any plans to develop it in the
future?
Regards,
Mark
Ok thanks.
From: Kyle Lahnakoski
Sent: 25 August 2025 18:51
To: Kamal Mittal ; user@flink.apache.org
Subject: Re: Sharing data among flink operator pipeline
operators to
execute on the same task thread (operator chain), avoid shuffles (keyBy,
rebalance, etc.) and keep parallelism equal so Flink uses forward partitioning
and chaining.
From: Kamal Mittal via user
Date: Monday, August 25, 2025 at 7:57 AM
To: Nikola Milutinovic , user
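A small sketch of the advice above: equal parallelism and no shuffle between stages lets Flink chain the operators into one task. env.fromData(...) is assumed available (Flink 1.20+/2.x; use fromElements on older versions), and the returns(...) calls just pin the lambda result types:
```
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        DataStream<String> out = env
            .fromData("a", "b", "c")                               // assumed on 1.20+/2.x
            .map(s -> s + "|header=value").returns(Types.STRING)   // e.g. metadata attached upstream
            .map(String::toUpperCase).returns(Types.STRING);       // chains: same parallelism, no shuffle
        // A keyBy()/rebalance() between the maps would break the chain and force
        // records through serialization and the network stack.
        out.print();
        env.execute("chaining-example");
    }
}
```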
No, I am not talking about the Kafka message key. Rather, the key/value pairs
which come separately, as below.
Headers
org.apache.kafka.clients.consumer.ConsumerRecord.headers()
From: Nikola Milutinovic
Sent: 25 August 2025 14:56
To: user@flink.apache.org
Subject: Re: Sharing data among flink operator
unsubscribe
Original Message
From: Nikola Milutinovic
: Thursday, August 21, 2025 at 11:35 AM
To: user@flink.apache.org
Subject: Sharing data among flink operator pipeline
Hello,
I have a scenario like for a Kafka source where Kafka headers also come along
with the Kafka record/event. The fetched Kafka headers need to be shared/passed
to next/parallel operators in
The Apache Flink community is very happy to announce the release of
Apache flink-connector-kafka 4.0.1.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available for download
unsubscribe
Original Message
From: Pedro Mázala
Hello there, Kamal!
Whenever I need to keep passing data downstream, I reflect it on my POJO.
Usually, I call those fields metadata and access them downstream. The state
will also segment data by operator, in other words, an operator cannot
access stored data from other operators (unless you use b
Hello,
I have a scenario like for a Kafka source where Kafka headers also come along
with the Kafka record/event. The fetched Kafka headers need to be shared/passed
to the next/parallel operators in the pipeline.
So, is there any way to share data across the operator pipeline?
Explored keyed state, which has the limitation th
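A sketch of the "metadata on the POJO" approach suggested in this thread: copy ConsumerRecord.headers() onto the record during deserialization, so downstream and parallel operators receive the headers as ordinary fields instead of via shared state. Class and field names are illustrative:
```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

class EventWithHeaders {
    public String payload;
    public Map<String, String> headers = new HashMap<>();
}

// Copies ConsumerRecord.headers() onto the POJO, so every downstream operator
// sees them as plain fields; no cross-operator state sharing needed.
public class HeaderAwareDeserializer implements KafkaRecordDeserializationSchema<EventWithHeaders> {

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<EventWithHeaders> out) throws IOException {
        EventWithHeaders event = new EventWithHeaders();
        event.payload = new String(record.value(), StandardCharsets.UTF_8);
        for (Header h : record.headers()) {
            event.headers.put(h.key(), new String(h.value(), StandardCharsets.UTF_8));
        }
        out.collect(event);
    }

    @Override
    public TypeInformation<EventWithHeaders> getProducedType() {
        return TypeInformation.of(EventWithHeaders.class);
    }
}
```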
To the community,
Our Flink task managers are currently experiencing frequent swapping in and
out due to Karpenter autoscaling. This is causing task managers to
repeatedly encounter RemoteTransportException (RTE), leading to job
failures.
We are looking for the ideal configuration to prevent
rowse/JDK-8275863
Best,
Zhanghao Chen
From: Xeno Amess
Sent: Saturday, August 9, 2025 22:07
To: Zhanghao Chen
Cc: user@flink.apache.org
Subject: Re: What is the best way to play with json in flink?
I notice you mean especially in jdk 17+, is there any more detail
I notice you mean especially in JDK 17+; are there any more details about it?
Right now we are using Flink 1.14 with JDK 8, so I do have a plan to do a
whole upgrade (meaning updating to Flink 2.1 with JDK 24/25/GraalVM 24).
Zhanghao Chen wrote on Saturday, August 9, 2025 at 21:56:
> We find fastjson2 much fas
We find fastjson2 much faster than Jackson, especially with JDK17+.
Best, Zhanghao
From: Xeno Amess
Sent: Friday, August 8, 2025 6:17:31 PM
To: user@flink.apache.org
Subject: What is the best way to play with json in flink?
right now we use fastjson2 (in
right now we use fastjson2 (in Collector, Source, etc), but as there is an
embedded Jackson provided in Flink, we wonder whether it is worth migrating to
using it instead.
we take care of performance very much, so if there is a performance
benchmark please let us know...
Also I'm kind of con
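One caveat worth adding here: the Jackson bundled with Flink is relocated under org.apache.flink.shaded.jackson2... and is internal API, so the usual route is to declare your own Jackson dependency; whether fastjson2 beats it on your payloads is best settled with a benchmark. A plain Jackson usage sketch for comparison (names are illustrative):
```
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.MapFunction;

// Reuse one ObjectMapper per task instance: it is thread-safe for read/write,
// and its construction is the expensive part.
public class ParseJson implements MapFunction<String, String> {

    private transient ObjectMapper mapper;

    @Override
    public String map(String value) throws Exception {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        JsonNode node = mapper.readTree(value);
        return node.path("id").asText();   // hypothetical field of interest
    }
}
```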
Thanks to the community guys anyway.
Xeno Amess wrote on Friday, August 8, 2025 at 18:17:
> right now we use fastjson2 (in Collector, Source, etc), but as there is an
> embedded Jackson provided in Flink, we wonder whether it is worth migrating to
> using it instead.
> we take care of performance very much
> > > On Thu, Jul 31, 2025 at 8:03 PM Lincoln Lee
> > > wrote:
> > >
> > >> Congratulations! Thanks for driving this!
> > >>
> > >> Best,
> > >> Lincoln Lee
> > >>
> > >>
> > >> Leonard Xu 于2025年7月3
> >>
> >> Leonard Xu wrote on Thursday, July 31, 2025 at 15:36:
> >>
> >> > Nice! Thanks Ron and all involved for the great work.
> >> >
> >> > Best,
> >> > Leonard
> >> >
> >> > > On Jul 31, 2025, at 14:30, Ron Liu wrote:
> >> >
Hi folks,
Wanted to invite you all to a public webinar we are holding next week -
https://www.linkedin.com/events/7357004263773872128/
In this webinar, we’ll share practical insights and proven strategies for
monitoring Flink clusters, identifying bottlenecks, and optimizing resource
usage. We
ulations! Thanks for driving this!
>>
>> Best,
>> Lincoln Lee
>>
>>
>> Leonard Xu wrote on Thursday, July 31, 2025 at 15:36:
>>
>> > Nice! Thanks Ron and all involved for the great work.
>> >
>> > Best,
>> > Leonard
>> >
>>
ll involved for the great work.
> >
> > Best,
> > Leonard
> >
> > > On Jul 31, 2025, at 14:30, Ron Liu wrote:
> > >
> > > The Apache Flink community is very happy to announce the release of
> > Apache
> > > Flink 2.1.0, which is the first rel
The Apache Flink community is very happy to announce the release of Apache
Flink 2.1.0, which is the first release for the Apache Flink 2.1 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
seems there is FLINK-33326
is anyone handling it now?
if not, I would like to have a try...
Xeno Amess wrote on Wednesday, July 30, 2025 at 18:49:
> Hi.
> I see the `fromCollection` impl, and found out it actually uses
> `FromIteratorFunction`, which is a `SourceFunction`, which is
> deprecated & internal.
Hi.
I see the `fromCollection` impl, and found out it actually uses
`FromIteratorFunction`, which is a `SourceFunction`, which is
deprecated & internal.
I wonder if it would be valuable to create a class like this, but using the new
`Source` interface.
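A sketch of what a `Source`-based replacement looks like today using DataGeneratorSource from flink-connector-datagen (recent versions also offer env.fromData(...) for literal collections); treat the exact constructor shape as something to verify against your version:
```
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.connector.datagen.source.GeneratorFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        GeneratorFunction<Long, String> gen = i -> "element-" + i;
        // A bounded Source-API source producing 10 elements, replacing the
        // SourceFunction-backed fromCollection for generated data.
        DataGeneratorSource<String> source = new DataGeneratorSource<>(gen, 10, Types.STRING);
        DataStream<String> stream = env.fromSource(source, WatermarkStrategy.noWatermarks(), "generator");
        stream.print();
        env.execute("bounded-source-example");
    }
}
```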
hi,
Which version do you use? Could you share the whole exception stack?
Best,
Shengkai
李 琳 wrote on Tuesday, July 8, 2025 at 23:06:
>
> Hi all,
>
> I used the following code to create an applicationConfiguration for
> submitting a Flink job to a YARN cluster in application mode:
>
> *App
ion of 2.2.
https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details
about the 2.1 release.
On 23 Jul 2025, at 12:42, Xeno Amess wrote:
(though I know we don't really follow semver at backward compa, at least we can
try the 3-part version number...like 2.1.0...or
k on several versions at the same time. You
>> were looking at the 2.1 notes in a snapshot version of 2.2.
>>
>> https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides
>> details about the 2.1 release.
>>
>>
>>
>> On 23 Jul 2025, at 12:4
i.apache.org/confluence/display/FLINK/2.1+Release provides
> details about the 2.1 release.
>
>
>
> On 23 Jul 2025, at 12:42, Xeno Amess wrote:
>
> (though I know we don't really follow semver at backward compa, at least
> we can try the 3-part version number...like 2.1.0...or
Hello all.
Just wanted to give a heads-up that Flink 1.20 images based on Java 17 have
problems with launching jobs via Flink Kubernetes Operator. Those based on Java
11 work.
We are running a Flink Session cluster on Kubernetes, deploying it using Flink
K8s Operator. Our session cluster was
The project can of course work on several versions at the same time. You were
looking at the 2.1 notes in a snapshot version of 2.2.
https://cwiki.apache.org/confluence/display/FLINK/2.1+Release provides details
about the 2.1 release.
> On 23 Jul 2025, at 12:42, Xeno Amess wr
Flink 1.20.2 tgz package with oracle-cdc 3.4 and ojdbc17.jar downloaded
from official website.
Here are my steps:
-- Step 1: Setup XE to enable archive logs
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest =
'/opt/oracle/oradata/recovery_area
Hey,
I'm trying to run Flink K8S Operator with version 1.12. The Flink version
I'm trying to use is 1.17.1. Generally the deployment looks & works
correctly; however, if at any point the job gets restarted because of an
exception, the operator can't correctly deserial
Hi Flink community,
We've encountered a compatibility issue with the Flink Kubernetes operator
on Kubernetes 1.33 clusters. The operator fails reconciliation with the
following error:
Unrecognized field "emulationMajor" (class
io.fabric8.kubernetes.client.VersionInfo),
not marked
Hi Patricia,
`setPredefinedOptions` can still be found in `EmbeddedRocksDBStateBackend`
in Flink 2.0.
You can set the option programmatically via config:
config.set(
RocksDBOptions.PREDEFINED_OPTIONS,
PredefinedOptions.FLASH_SSD_OPTIMIZED.name());
Best,
Han Yin
> On Jul 15, 2025, at 13
Hi,
We are migrating our Flink 1.20 to Flink 2.0.
We have set the code in our project like this:
RocksDBStateBackend rocksDb = new RocksDBStateBackend(config.getPath() +
"/myrockdbpath");
rocksDb.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED);
We followed the new way in Fl
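Putting the earlier answer into this migration's terms: in Flink 2.0 the backend is selected via configuration rather than constructed in code. A sketch, assuming the 2.x package org.apache.flink.state.rocksdb (formerly org.apache.flink.contrib.streaming.state):
```
import org.apache.flink.configuration.CheckpointingOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.StateBackendOptions;
import org.apache.flink.state.rocksdb.PredefinedOptions;
import org.apache.flink.state.rocksdb.RocksDBOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbMigrationSketch {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Replaces "new RocksDBStateBackend(...)" from 1.x.
        config.set(StateBackendOptions.STATE_BACKEND, "rocksdb");
        config.set(RocksDBOptions.PREDEFINED_OPTIONS, PredefinedOptions.FLASH_SSD_OPTIMIZED.name());
        // Checkpoint/savepoint locations are configured here as well in 2.x.
        config.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, "file:///tmp/checkpoints");
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config);
        env.fromData("a", "b").print();   // placeholder pipeline
        env.execute("rocksdb-migration-sketch");
    }
}
```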
Congratulations to everyone involved!
Thank you Ferenc for managing the release!
On Fri, Jul 11, 2025 at 1:15 PM Ferenc Csaky wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.20.2, which is the second bugfix release for the Apache F
Congratulations to everyone involved!
Thank you Ferenc for managing the release!
On Fri, Jul 11, 2025 at 1:13 PM Ferenc Csaky wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.3, which is the third bugfix release for the Apache F
The Apache Flink community is very happy to announce the release of Apache
Flink 1.20.2, which is the second bugfix release for the Apache Flink 1.20
series.
Apache Flink® is an open-source stream processing framework for distributed,
high-performing, always-available, and accurate data streaming
The Apache Flink community is very happy to announce the release of Apache
Flink 1.19.3, which is the third bugfix release for the Apache Flink 1.19
series.
Apache Flink® is an open-source stream processing framework for distributed,
high-performing, always-available, and accurate data streaming
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.12.1.
The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
The release is available for download at
Yes, it's possible. You may refer to the example here:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#most-frequent-issues
Best,
Zhanghao Chen
From: patricia lee
Sent: Thursday, Ju
ew config
> pipeline.serialization-config [1]. I've created a new JIRA issue [2] to fix
> the doc.
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config
> [2] https://issues.apache.org/jira/browse/FLINK
Hi Patricia,
You may register the type using the new config pipeline.serialization-config
[1]. I've created a new JIRA issue [2] to fix the doc.
[1]
https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/deployment/config/#pipeline-serialization-config
[2] https://issues.apach
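A sketch of wiring the option mentioned above; the value syntax in the string below is an assumption, so take the authoritative format from [1]:
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SerializationConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Replaces the 1.x register-type calls. The "{type: pojo}" shape is an
        // assumption; consult the pipeline.serialization-config docs for exact syntax.
        config.setString(
            "pipeline.serialization-config",
            "[com.example.MyClass: {type: pojo}]");
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config);
        env.fromData(1, 2, 3).print();   // placeholder pipeline
        env.execute("serialization-config-sketch");
    }
}
```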
Hi,
We are currently migrating from Flink version 1.18 to Flink version 2.0.
We have this configuration:
StreamExecutionEnvironment env = new StreamExecutionEnvironment();
env.setRegisterTypes(MyClass.class);
In flink 2.0, if our understanding is correct, we'll use this registerPoj
Hi,
We are currently migrating our flink projects from version 1.18 to 2.0.
We have this part of the code where we set the model in the
StreamExecutionEnvironment env = new StreamExecutionEnvironment();
env.setRegisterTypes(MyClass.class);
Right now in Flink 2.0, I followed this
Hi all,
I used the following code to create an applicationConfiguration for submitting
a Flink job to a YARN cluster in application mode:
ApplicationConfiguration applicationConfiguration =
new ApplicationConfiguration(userProgramArguments,
userMainClass);
However, I
"4.0.0-2.0"
lazy val flink_simple_testing = project
.in(file("flink_simple_testing"))
.settings(
name := "flink-testing",
libraryDependencies ++= Seq(
"org.apache.flink" % "flink-core" % flinkVersion,
"org.apache.flink" % &
ere:
https://nightlies.apache.org/flink/flink-docs-release-2.0/docs/dev/datastream/overview/
Updated slightly to this:
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStre
Hi
I just wanted to know if we have to do some special configurations at the
infrastructure provider to scale up the job. For example if I add more
nodes in my node pool of my k8s cluster, the task managers are not
automatically scaled up.
Now I have also run Flink on AWS EMR where it uses Yarn
Dear Apache Flink Community,
I hope you're doing well.
We are currently operating a Flink deployment on *Kubernetes*, with *high
availability (HA) configured using Kubernetes-based HA services*. We're
exploring approaches for *geo-redundancy (GR)* to ensure disaster recovery
and fault
Hey Gyula! I created
https://issues.apache.org/jira/browse/FLINK-38033
I will add logs / provide more details when I have time...
On Mon, Jun 30, 2025 at 6:18 PM Gyula Fóra wrote:
> Hey!
>
> Could you please open a JIRA ticket with the operator logs so we can
> investigate?
>
>
rg/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq
>
> Regards,
>
> Salva
>
> On 2025/06/27 15:21:47 Salva Alcántara wrote:
> > I was running Flink Kubernetes Operator v1.11 with a minor tweak, see
> more
> > here:
> > -
> >
> https://www.project-syndicate.org/comm
Sorry, I meant this link (for the operator tweak which I indeed applied in
v1.10, not v1.11):
- https://lists.apache.org/thread/b9gkso9wvwp2s19dn7s1ol5b9okbbtwq
Regards,
Salva
On 2025/06/27 15:21:47 Salva Alcántara wrote:
> I was running Flink Kubernetes Operator v1.11 with a minor tweak,
I was running Flink Kubernetes Operator v1.11 with a minor tweak, see more
here:
-
https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05
and everything was fine. However, after upgrading to v1.12, the operator is
After upgrading the Flink Kubernetes Operator from v1.11 to v1.12, upgrades
started to fail in all my jobs with the following error message:
```
Error during event processing ExecutionScope{ resource id:
ResourceID{name='my-job-checkpoint-periodic-1741010907590',
namespace='plat
hi all
ok, so I'm using the following files in my parent pom file.
The Flink source connector's intention is to create SNMP agents/targets:
use Flink SQL to define a table
that will be scraped at a specified interval.
(pom property values, XML tags stripped by the archive: 1.20.1, 17, 3.11.0,
3.5.2, ${java.version}, ${java.version}, UTF-8, 1.7.36, 2)
Hi all,
In case it helps someone in the future, I wanted to share that I was
finally able to identify the problem and create a fix that works for me.
As mentioned in earlier messages, the default Avro behavior in Flink
generates a schema based on the table definition. This includes some
Following up on a proposal I shared in the dev thread
<https://lists.apache.org/thread/sov9vg1jl0tnv4d1857sk69s96dcnwq1> — we’re
working on externalizing and modernizing the Flink-Pinot connector
(currently in Apache Bahir, which is now in the Attic). The plan includes:
-
Movi