Hi Nikola,
Good point. We should follow up on making Flink fully compatible with JDK 17. In
fact, I found that some of the module exports suggested in the default
flink-conf file are already unnecessary. For example,
"--add-exports=java.base/sun.net.util=ALL-UNNAMED" is no longer needed since
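For context, these exports are usually passed through the JVM options in the Flink configuration; a minimal illustrative sketch (the exact set of required opens/exports varies by Flink release, so treat the entries below as examples rather than the authoritative list):

```yaml
# Sketch of flink-conf.yaml JVM options for running on JDK 17.
# Illustrative subset only -- consult the default config shipped with
# your Flink version for the full, version-specific list.
env.java.opts.all: >-
  --add-exports=java.base/sun.net.util=ALL-UNNAMED
  --add-opens=java.base/java.lang=ALL-UNNAMED
  --add-opens=java.base/java.util=ALL-UNNAMED
```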
Hi,
Are you deploying the job in Session mode? Underneath, Flink distinguishes
cluster-level and job-level configs. For Application mode, the two are unified.
When a job is submitted to the session cluster though, the values of
cluster-level config options, such as the memory for JM and TM, wil
Hi,
In short, yes, if there are no user-defined functions. For UDFs, you'll have to
ensure that they do not cache input records internally (e.g., by keeping them in
a local hash map) without copying them first; otherwise downstream ops may
mutate the cached data and break data integrity.
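To illustrate the hazard, here is a minimal plain-Java sketch (no Flink dependency; `Record` and the two lists are hypothetical stand-ins for a reused input object and a UDF's internal cache). With object reuse, the runtime rewrites the same record instance for every element, so caching it by reference silently loses earlier values:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: why a UDF must not cache incoming records by
// reference when the runtime reuses record objects.
public class ObjectReuseDemo {
    static class Record {
        int value;
        Record(int value) { this.value = value; }
        Record copy() { return new Record(value); }
    }

    // Simulates three records arriving through one reused object and returns
    // the first cached value seen via (a) reference caching, (b) copy caching.
    static int[] firstCachedValues() {
        Record reused = new Record(0);           // the runtime reuses this instance
        List<Record> byReference = new ArrayList<>();
        List<Record> byCopy = new ArrayList<>();
        for (int i = 1; i <= 3; i++) {
            reused.value = i;                    // next record overwrites the fields
            byReference.add(reused);             // broken: all entries alias one object
            byCopy.add(reused.copy());           // safe: defensive copy per record
        }
        return new int[] { byReference.get(0).value, byCopy.get(0).value };
    }

    public static void main(String[] args) {
        int[] vals = firstCachedValues();
        System.out.println("by reference: " + vals[0] + ", by copy: " + vals[1]);
        // by reference: 3, by copy: 1 -- the reference cache lost the first record
    }
}
```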
Best,
Zhanghao Chen
Additional info, a trace from the logs: in Flink 1.18 I saw the following, and
it looks like the resourceVersion is the correct one:
2025-01-22 20:02:37,771 DEBUG
org.apache.flink.kubernetes.shaded.io.fabric8.kubernetes.client.informers.impl.DefaultSharedIndexInformer
[] - Resync skipped due to 0 full resync per
I saw there was an upgrade to 6.9.2 in Flink 1.19, whereas Flink 1.18 used
6.6.2.
When running the same jar with Flink 1.18 it works OK.
Was there any additional configuration required for Flink 1.19 due to this
upgrade?
Thanks again
On Wed, Jan 22, 2025 at 3:24 PM Sigalit Eliazov wrote:
> H