Hi Deniz,

Great to hear from someone using Ververica Platform with StateFun.
When deploying your job you can specify `additionalDependencies` [1],
which are also pulled and put on the classpath.
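
For example, the artifact section of such a deployment could look
roughly like this (a sketch only; the bucket URIs are placeholders, and
the field names follow the linked docs):

=======================
kind: JAR
jarUri: s3://my-bucket/statefun-job.jar        # placeholder URI
additionalDependencies:
  - s3://my-bucket/config/module.yaml          # placeholder URI
=======================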

Hopefully, that is suitable for your scenario.

Best,
Fabian

[1] 
https://docs.ververica.com/user_guide/application_operations/deployments/artifacts.html?highlight=additionaldependencies

On Fri, Dec 3, 2021 at 4:51 PM Deniz Koçak <lend...@gmail.com> wrote:
>
> Hi Igal,
>
> We are using official images from Ververica as the Flink installation.
> Actually, I was hoping to specify the names of the module files to use
> at runtime via `mainArgs` in the deployment configuration (or in some
> other way). That way we could select the target yaml files, but I
> think this is not possible?
>
> =======================
> kind: JAR
> mainArgs: '--active-profile nxt'
> =======================
>
> Therefore, it's easier to use a single jar in our pipelines instead of
> creating a different jar file for each environment (at least for
> development and production).
>
> For solution 2, are you referring to the Flink distribution, i.e. the
> /flink/lib folder in the official Docker image?
>
> Thanks,
> Deniz
>
> On Fri, Dec 3, 2021 at 3:06 PM Igal Shilman <i...@apache.org> wrote:
> >
> > Hi Deniz,
> >
> > StateFun looks for module.yaml(s) on the classpath.
> > If you are submitting the job to an existing Flink cluster, this really
> > means that the file needs to be either:
> > 1. packaged with the jar (as you are already doing), or
> > 2. present on the classpath, which means you can place your
> > module.yaml in the /lib directory of your Flink installation; I suppose
> > that you have different installations in different environments (see
> > the sketch below for what such a file contains).
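> >
> > For reference, a minimal module.yaml in the StateFun 3.0 remote-module
> > format might look like the following (a sketch only; the function
> > namespace and endpoint URL below are placeholders):
> >
> > =======================
> > version: "3.0"
> > module:
> >   meta:
> >     type: remote
> >   spec:
> >     endpoints:
> >       - endpoint:
> >           meta:
> >             kind: http
> >           spec:
> >             functions: example/*  # placeholder namespace
> >             urlPathTemplate: https://functions.example.com/{function.name}  # placeholder URL
> > =======================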
> >
> > I'm not aware of a way to submit any additional files with the jar via the 
> > flink cli, but perhaps someone else can chime in :-)
> >
> > Cheers,
> > Igal.
> >
> >
> > On Thu, Dec 2, 2021 at 3:29 PM Deniz Koçak <lend...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> We have a simple stateful-function job, consuming from Kafka, calling
> >> an HTTP endpoint (on AWS via an Elastic Load Balancer) and publishing
> >> the result back to Kafka.
> >>
> >> * We created a jar file to be deployed on a standalone cluster (it's
> >> not a Docker image), so we add `statefun-flink-distribution`
> >> version 3.0.0 as a dependency of that jar.
> >> * The entry class in our job configuration is
> >> `org.apache.flink.statefun.flink.core.StatefulFunctionsJob`, and we
> >> simply keep a single module.yaml file in the resources folder for the
> >> module configuration (a rough sketch of the deployment follows below).
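> >>
> >> For reference, the artifact section of our deployment currently looks
> >> roughly like this (the jar URI below is a placeholder):
> >>
> >> =======================
> >> kind: JAR
> >> jarUri: s3://our-bucket/statefun-job.jar   # placeholder URI
> >> entryClass: org.apache.flink.statefun.flink.core.StatefulFunctionsJob
> >> =======================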
> >>
> >> My question here is: we would like to deploy that jar to different
> >> environments (dev and prod), and we are not sure how we can pass
> >> different module configurations (module.yaml, or
> >> module_nxt.yaml/module_prd.yaml) to the job at startup without
> >> creating a separate jar file for each environment?
> >>
> >> Thanks,
> >> Deniz
