Hi Maxim,

You need to add the S3 filesystem to the Flink plugins directory in the operator image to be able to work with S3. This is the same plugin mechanism as for any other filesystem, and it mirrors how Flink itself works. Flink offers two S3 filesystem implementations:

- flink-s3-fs-hadoop[1] for s3a:// paths
- flink-s3-fs-presto[2] for s3:// paths
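Once the plugin jar is in place, the job spec can point straight at the bucket. Here is a minimal sketch of what the FlinkDeployment could look like (the bucket, image, and job names are hypothetical placeholders, and the resource settings are illustrative, not from this thread):

```
# Hypothetical example; bucket, image, and jar names are placeholders.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: s3-jar-example
spec:
  image: my-registry/flink-with-s3:1.17.2  # image repackaged as shown below
  flinkVersion: v1_17
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # s3a:// matches the flink-s3-fs-hadoop plugin installed in the image below
    jarURI: s3a://my-bucket/jars/my-job.jar
    parallelism: 2
    upgradeMode: stateless
```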
One easy way of adding the plugin is to repackage the image yourself, for example:

```
FROM --platform=linux/amd64 apache/flink-kubernetes-operator:1.9.0
ENV FLINK_VERSION=1.17.2
# Download the plugin jar into the build context first, e.g. curl it from Maven Central
ADD flink-s3-fs-hadoop-$FLINK_VERSION.jar /opt/flink/plugins/flink-s3-fs-hadoop/
```

[1] https://mvnrepository.com/artifact/org.apache.flink/flink-s3-fs-hadoop
[2] https://mvnrepository.com/artifact/org.apache.flink/flink-s3-fs-presto

Best Regards,
Ahmed Hamdy

On Fri, 2 Aug 2024 at 00:05, Maxim Senin via user <user@flink.apache.org> wrote:

> When will Flink Operator support schemas other than `local` for
> application deployment jar files? I just tried flink operator 1.9 and it's
> still not working with `s3` locations. If s3 is good for savepoints and
> checkpoints, why can't the jar also be on s3?
>
> Thanks,
> Maxim
>
> ------------------------------
>
> COGILITY SOFTWARE CORPORATION LEGAL DISCLAIMER: The information in this
> email is confidential and is intended solely for the addressee. Access to
> this email by anyone else is unauthorized. If you are not the intended
> recipient, any disclosure, copying, distribution or any action taken or
> omitted to be taken in reliance on it, is prohibited and may be unlawful.