Hi Matt,

- In a FlinkDeployment you can use an init container to download your
artifact onto a shared volume, then refer to it with a local:// path from
the main container. FlinkDeployment comes with pod template support:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/#pod-template
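For illustration, a minimal sketch of that approach (the bucket, jar name,
and init-container image are taken from your mail or are hypothetical; the
main container must be named flink-main-container so the operator can merge
it with the generated spec):

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: hop-pipeline
spec:
  image: flink:1.15
  flinkVersion: v1_15
  serviceAccount: flink
  podTemplate:
    spec:
      initContainers:
        # Downloads the fat jar from S3 onto a shared emptyDir volume
        # before the Flink containers start (assumes AWS credentials are
        # available, e.g. via IRSA on EKS).
        - name: artifact-fetcher
          image: amazon/aws-cli
          command:
            - aws
            - s3
            - cp
            - s3://hop-eks/hop/hop-2.1.0-fat.jar
            - /opt/flink/artifacts/
          volumeMounts:
            - name: artifacts
              mountPath: /opt/flink/artifacts
      containers:
        # Name is significant: the operator merges this entry into the
        # main Flink container it generates.
        - name: flink-main-container
          volumeMounts:
            - name: artifacts
              mountPath: /opt/flink/artifacts
      volumes:
        - name: artifacts
          emptyDir: {}
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # From Flink's point of view the jar is now local to the image,
    # which satisfies the "local" scheme requirement in application mode.
    jarURI: local:///opt/flink/artifacts/hop-2.1.0-fat.jar
    parallelism: 2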

- FlinkSessionJob comes with an artifact fetcher, but it may need some
tweaking to make it work in your environment:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/overview/#flinksessionjob-spec-overview
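A rough sketch of that option (the deploymentName refers to a hypothetical
session-mode FlinkDeployment you would already be running; for an s3:// URI
the operator needs access to the bucket, e.g. via the matching Flink
filesystem plugin and credentials on the operator side):

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: hop-session-job
spec:
  # Existing session cluster managed by the operator (hypothetical name).
  deploymentName: hop-session-cluster
  job:
    # Remote URI: the operator fetches this artifact and submits it to
    # the session cluster, so no init container is needed here.
    jarURI: s3://hop-eks/hop/hop-2.1.0-fat.jar
    parallelism: 2
    upgradeMode: stateless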

I hope this helps; let us know if you have further questions.

Cheers,
Matyas



On Tue, Jun 21, 2022 at 2:35 PM Matt Casters <matt.cast...@neotechnology.com>
wrote:

> Hi Flink team!
>
> I'm interested in getting the new Flink Kubernetes Operator to work on AWS
> EKS.  Following the documentation I got pretty far.  However, when trying
> to run a job I got the following error:
>
>> Only "local" is supported as schema for application mode. This assumes
>> that the jar is located in the image, not the Flink client. An example of
>> such path is: local:///opt/flink/examples/streaming/WindowJoin.jar
>
>
> I have an Apache Hop/Beam fat jar capable of running the Flink pipeline,
> referenced in my yml file:
>
> jarURI: s3://hop-eks/hop/hop-2.1.0-fat.jar
>
> So how could I go about getting the fat jar in a desired location for the
> operator?
>
> Getting this to work would be really cool for both short and long-lived
> pipelines in the service of all sorts of data integration work.  It would
> do away with the complexity of setting up and maintaining your own Flink
> cluster.
>
> Thanks in advance!
>
> All the best,
>
> Matt (mcasters, Apache Hop PMC)
>
>
