Hi Rajat,
Flink releases are compiled with JDK 8, but they can run on JDK 8-17. As
long as Flink runs on JDK 17 on both the server and the client side, you are
free to write your Flink jobs in Java 17.
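As a hedged aside, one quick way to confirm which JDK a library's classes were compiled for is to read the class-file major version from the first bytes of a `.class` file (52 = Java 8, 61 = Java 17). A minimal Python sketch (the function name is illustrative, not a Flink API):

```python
import struct

# Well-known class-file major versions: 52 = Java 8, 55 = Java 11, 61 = Java 17.
JAVA_RELEASE = {52: 8, 55: 11, 61: 17}

def class_file_java_version(header: bytes) -> int:
    """Return the Java release a .class file header was compiled for."""
    magic, _minor, major = struct.unpack(">IHH", header[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return JAVA_RELEASE.get(major, major)

# Example: the first 8 bytes of a class compiled with JDK 8
header = bytes.fromhex("cafebabe00000034")  # major version 0x34 = 52
print(class_file_java_version(header))  # -> 8
```

Seeing major version 52 in the Flink jars only means they were *compiled* with JDK 8; it does not prevent them from running on a JDK 17 runtime.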
Best,
Zhanghao Chen
From: Rajat Pratap
Sent: Tuesday, May
> Flink CDC [1] now provides full-DB sync and schema evolution ability as a
pipeline job.
Ah! That is very cool.
> Iceberg sink support was suggested before, and we’re trying to implement
this in the next few releases. Does it cover the use-cases you mentioned?
Yes! That would be fantastic.
Dear Biao Geng,
Thank you for your reply. You are right: the StateFun metrics are tracked along with the "normal" Flink metrics, I just could not find them.
If anyone is interested, metrics under the flink_taskmanager_job_task_operator_functions___ prefix are the way to get them.
Thanks again.
Best regards,
Oliver
Hello,
I've also noticed this in our Argo CD setup. Since priority=0 is the
default, Kubernetes accepts it but doesn't persist it in the stored resource;
I'm guessing something like a built-in mutating admission step prunes the
default value. The "priority" property can be safely removed from the CRDs.
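For context, the field in question typically lives in the CRD's `additionalPrinterColumns`; the snippet below is illustrative (names and paths are assumptions, not the exact Flink operator CRD), showing the `priority: 0` entry that can be dropped without changing behavior:

```yaml
# Illustrative CRD fragment, assuming a Flink-operator-style resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1beta1
      additionalPrinterColumns:
        - name: Job Status
          type: string
          jsonPath: .status.jobStatus.state
          priority: 0   # default value; the API server does not persist it,
                        # so removing this line avoids the Argo CD diff
```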
Regards,
Hi Team,
I am writing Flink jobs with the latest Flink release (1.18.1). The
JobManager is also deployed with the same version build. However, we faced
issues when we deployed the jobs.
On further investigation, I noticed that all the Flink libraries are built
with JDK 1.8. I have the following q