Hi,

That's because the ConfigMap volume is always read-only.
Currently /docker-entrypoint.sh tries to update some configs for the Docker
environment, but those updates are not needed in Kubernetes.
So I think we can safely ignore those errors when using the operator or the
native Kubernetes integration.
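
For illustration, this is just the usual `sed -i` failure mode on a read-only
mount; a minimal sketch (assuming /opt/flink/conf is the mounted ConfigMap,
this is not the actual entrypoint script):

```
# Minimal reproduction, assuming /opt/flink/conf is a read-only ConfigMap mount.
# `sed -i` writes a temporary file into the same directory and renames it over
# the original, so it fails even if flink-conf.yaml itself is owned by
# flink:flink with read-write permissions.
sed -i 's/old-value/new-value/' /opt/flink/conf/flink-conf.yaml
# -> sed: couldn't open temporary file /opt/flink/conf/sedXXXXXX: Read-only file system

# Redirecting into a temp file in that directory fails for the same reason.
echo "some.option: value" >> /opt/flink/conf/flink-conf.yaml.tmp
# -> bash: /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
```

So file ownership and permissions don't help here; the directory itself is
read-only.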

There is already a ticket[1] to track this.

[1]https://issues.apache.org/jira/browse/FLINK-21383

Best,
Weihua


On Wed, Apr 5, 2023 at 1:32 AM Ivan Webber via user <user@flink.apache.org>
wrote:

>
> I’ve noticed that all jobs I start with `flink-operator` have the
> following messages at the top of their logs for both the jobmanager and
> taskmanager pods:
>
> ```
> sed: couldn't open temporary file /opt/flink/conf/sedRTwsr1: Read-only file system
> sed: couldn't open temporary file /opt/flink/conf/sedcDS30D: Read-only file system
> /docker-entrypoint.sh: line 73: /opt/flink/conf/flink-conf.yaml: Read-only file system
> /docker-entrypoint.sh: line 89: /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
> Starting kubernetes-application as a console application on host test-replay-run-b6458d699-nmfvf.
> ```
>
> It seems these failures are due to the Flink Docker images’ entrypoint
> being run by a user without permission to write to `/opt/flink/conf` (as
> part of `sed -i`) or to pipe to files in that folder. However, I’ve built my
> own container based on the Docker scripts, and even after ensuring that all
> files are owned by `flink:flink` with full read-write permissions, these
> messages still show up when running with flink-operator. Accordingly, I’m
> wondering whether this is a bug or just something to ignore (e.g.
> flink-operator initialized the files and locked them to prevent further
> changes). If they are just something to ignore, it might be good to add an
> argument to `/docker-entrypoint.sh` that skips those steps so there aren’t
> confusing error messages.
>
> Thanks,
>
>
> Ivan
>
