Hi Gunnar.

Answer 1.

I am not sure which image we should be talking about, the operator or the Flink
session cluster, since I do not know which one interprets the job instructions.

Anyway, we are using an image derived from the official Flink image:
flink:1.20.1-java17

Answer 2.

We are using 1.12, which was an upgrade from 1.10. Maybe the upgrade didn’t clean
everything up properly. I can try a clean install, since the JIRA ticket you
mention suggests something along those lines: setting default-opts. Maybe an old
configuration was carried over to the new operator.
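
If the stale-configuration suspicion is right, one way to rule it out would be to pin the JVM default opts explicitly on the session cluster, so nothing inherited from the 1.10 install can win. A hedged sketch only, not verified against our setup: the key env.java.default-opts.all is the one the FLINK-36646 discussion revolves around, and the --add-opens value below is illustrative, not the full list:

```yaml
# Hypothetical FlinkDeployment fragment: explicitly set the JVM default
# opts so a value left over from the old operator cannot be inherited.
# Key name and value are assumptions based on the FLINK-36646 discussion.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-cluster-session-cluster
spec:
  flinkConfiguration:
    env.java.default-opts.all: >-
      --add-opens=java.base/java.util=ALL-UNNAMED
```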

Thanks for the hints and reassurance.

Nikola.


From: Gunnar Morling <gunnar.morl...@googlemail.com>
Date: Wednesday, June 11, 2025 at 2:19 PM
To: Nikola Milutinovic <n.milutino...@levi9.com>
Subject: Re: Problems running Flink job via K8s operator
Hey Nikola,

> Problem 1: The first problem is that the operator doesn’t know about “local” 
> URI schema used in jarURI.

"local" should work, e.g. see here for an example: 
https://github.com/apache/flink-kubernetes-operator/blob/main/examples/basic.yaml#L38.
 What's the container image you are using (the CR you shared doesn't tell)? It 
should be derived from the default Flink image.

> Problem 2: With jarURI dropped I get further, but now Java 9+ modules bite me.

Which version of the operator are you using? This should work OOTB as of 1.11 
or newer (see https://issues.apache.org/jira/browse/FLINK-36646). I ran into 
this myself on an earlier version, discussing it here (see the repo linked from 
that post for a workaround, based on that 1.11 fix): 
https://www.decodable.co/blog/get-running-with-apache-flink-on-kubernetes-1

Hth,

--Gunnar


On Wed, 11 Jun 2025 at 09:46, Nikola Milutinovic <n.milutino...@levi9.com> wrote:
Hello.

I have problems trying to run a Flink session job using Flink Kubernetes 
operator. Two problems, so far. This is the Spec I am trying to use:

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: nix-test
spec:
  deploymentName: flink-cluster-session-cluster
  job:
    jarURI: "local:///opt/flink/opt/flink-python-1.20.1.jar"
    entryClass: "org.apache.flink.client.python.PythonDriver"
    args:
      [
        "--python", "/opt/flink/workflows/CACHE/cache_valkey_updater.py",
        "--cdc_kafka_topic", "cdc-inspection-type",
        "--entity_type", "inspection-type",
        "--field_names", "Name_Enum"
      ]
    parallelism: 1
    state: running
    upgradeMode: savepoint

Problem 1: The first problem is that the operator doesn’t know about the “local”
URI scheme used in jarURI.


Could not find a file system implementation for scheme 'local'. The scheme is
    not directly supported by Flink and no Hadoop file system to support this
    scheme could be loaded.

Is there something I should turn on in the config for this to be recognized?
In any case, I have read that for PyFlink jobs, this is just a placeholder and 
can be dropped. So I did.
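
Concretely, the job section I ended up with looks like this, with jarURI simply omitted and everything else as in the spec above (args elided here for brevity):

```yaml
# Same FlinkSessionJob spec as above, minus jarURI.
spec:
  deploymentName: flink-cluster-session-cluster
  job:
    entryClass: "org.apache.flink.client.python.PythonDriver"
    args: ["--python", "/opt/flink/workflows/CACHE/cache_valkey_updater.py"]
    parallelism: 1
    state: running
    upgradeMode: savepoint
```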

Problem 2: With jarURI dropped I get further, but now Java 9+ modules bite me.


Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make
    field private final java.util.Map java.util.Collections$UnmodifiableMap.m accessible:
    module java.base does not "opens java.util" to unnamed module @4ba2ca36
    at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(Unknown Source)
    at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(Unknown Source)
    at java.base/java

I know that we must add a lot of those --add-opens flags to the Flink config,
and I have them all. But this looks like something that should be added to the
Operator itself.
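
For what it’s worth, the failure can be reproduced outside of Flink entirely. This is a minimal standalone sketch (class and method names are made up): on Java 16+ the setAccessible call throws exactly this InaccessibleObjectException unless the JVM is started with --add-opens java.base/java.util=ALL-UNNAMED, which is why the flag has to reach whichever JVM actually runs the reflective code.

```java
import java.lang.reflect.Field;
import java.lang.reflect.InaccessibleObjectException;
import java.util.Collections;
import java.util.Map;

// Minimal reproduction of the error above: reflective access to the
// private field 'm' of java.util.Collections$UnmodifiableMap fails on
// Java 16+ unless the JVM is launched with
// --add-opens java.base/java.util=ALL-UNNAMED.
public class OpensDemo {
    // Returns true only when java.util is opened to the unnamed module,
    // i.e. when the --add-opens flag was passed to this JVM.
    static boolean canOpenJavaUtil() {
        Map<String, String> wrapped =
            Collections.unmodifiableMap(Map.of("k", "v"));
        try {
            Field f = wrapped.getClass().getDeclaredField("m");
            f.setAccessible(true); // throws without --add-opens
            return true;
        } catch (InaccessibleObjectException | NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("java.util open to unnamed module: " + canOpenJavaUtil());
    }
}
```

Running it without any flags fails; adding --add-opens java.base/java.util=ALL-UNNAMED to the java command makes it succeed.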

Any advice here?

Nikola.

