>>>>>> https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/
>>>>>>
>>>>>> Cheers,
>>>>>> Gyula
>>>>>>
>>>>>> On Sat, 3 Sep 2022 at 13:45, marco andreas
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> We are deploying a Flink application cluster on K8s. Following the
>>>>>>> official documentation, the JM is deployed as a Job resource. However,
>>>>>>> we are deploying a long-running Flink job that is not supposed to be
>>>>>>> terminated, and we also need to update the image of the Flink job.
>>>>>>>
>>>>>>> The problem is that a Job is an immutable resource, so we
>>>>>>> can't update it.
>>>>>>>
>>>>>>> So I'm wondering whether it's possible to use a Deployment resource
>>>>>>> for the JobManager, and whether that would have any side effects or
>>>>>>> repercussions.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
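For reference, the operator docs linked above manage the whole job
lifecycle through a FlinkDeployment custom resource, so an image change
becomes a spec update instead of a mutation of an immutable Job. A minimal
sketch of such a resource; the name, image, version and jar path below are
placeholders, not taken from this thread:

    apiVersion: flink.apache.org/v1beta1
    kind: FlinkDeployment
    metadata:
      name: my-app                                    # hypothetical name
    spec:
      image: registry.example.com/my-flink-job:1.0.0  # change this to upgrade
      flinkVersion: v1_15
      serviceAccount: flink
      jobManager:
        resource:
          memory: "2048m"
          cpu: 1
      taskManager:
        resource:
          memory: "2048m"
          cpu: 1
      job:
        jarURI: local:///opt/flink/usrlib/my-job.jar
        parallelism: 2
        upgradeMode: savepoint  # suspend with a savepoint, restore on new image

With upgradeMode set to savepoint, the operator suspends the job with a
savepoint before redeploying it with the new image, which covers the
long-running-job upgrade described above.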
>>>>>> Gil De Grove
Hello,
I may be really wrong with this, but from what I can tell from the source
file, you are using a semicolon to separate the values.
This probably means that you should set csv.field-delimiter to `;` to
make your example work properly.
Have you tried with that configuration in your CREATE TABLE statement?
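For illustration, a minimal sketch of such a definition via the Table API,
assuming the filesystem connector; the table name, schema and path are made
up, and only the 'csv.field-delimiter' option is the point here:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class SemicolonCsvExample {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // Hypothetical table and path; ';' is declared as the delimiter.
            tEnv.executeSql(
                    "CREATE TABLE my_table ("
                            + "  id INT,"
                            + "  name STRING"
                            + ") WITH ("
                            + "  'connector' = 'filesystem',"
                            + "  'path' = 'file:///tmp/input.csv',"
                            + "  'format' = 'csv',"
                            + "  'csv.field-delimiter' = ';'"
                            + ")");
        }
    }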
> configuring the contained FileSystemFactory to use a different
> scheme and config keys. It's a bit annoying, but it should work (now and in
> the future).
> Essentially you'd pretend that there are N completely different
> filesystems, but they are actually all the same implementation, just with
> different configurations.
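A rough sketch of that idea, assuming you have some way of obtaining the
real factory to delegate to; the "s3-archive" scheme and the key prefixes
below are invented for illustration:

    import java.io.IOException;
    import java.net.URI;
    import java.net.URISyntaxException;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.FileSystemFactory;

    // Exposes an existing filesystem implementation under a second scheme
    // with its own config key prefix, so two differently configured
    // instances of the same implementation can coexist.
    public class ArchiveS3FileSystemFactory implements FileSystemFactory {

        // The real factory to delegate to; how you obtain it (direct
        // instantiation, ServiceLoader, ...) depends on your setup.
        private final FileSystemFactory delegate;

        public ArchiveS3FileSystemFactory(FileSystemFactory delegate) {
            this.delegate = delegate;
        }

        @Override
        public String getScheme() {
            return "s3-archive"; // hypothetical second scheme
        }

        @Override
        public void configure(Configuration config) {
            // Remap "s3-archive.*" keys onto the "s3.*" keys the delegate reads.
            Configuration remapped = new Configuration();
            for (String key : config.keySet()) {
                if (key.startsWith("s3-archive.")) {
                    remapped.setString(
                            "s3." + key.substring("s3-archive.".length()),
                            config.getString(key, null));
                }
            }
            delegate.configure(remapped);
        }

        @Override
        public FileSystem create(URI fsUri) throws IOException {
            // Rewrite the URI back to the scheme the delegate understands.
            try {
                URI rewritten = new URI("s3", fsUri.getAuthority(), fsUri.getPath(),
                        fsUri.getQuery(), fsUri.getFragment());
                return delegate.create(rewritten);
            } catch (URISyntaxException e) {
                throw new IOException("Could not rewrite URI " + fsUri, e);
            }
        }
    }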
that the initialization
> happens when the process is started.
>
> You'll need to run that job in a separate cluster.
>
> Overall, this sounds like something that should run externally; assert
> some precondition, then configure Flink appropriately, then run the job.
>
>
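A small sketch of that "assert, configure, run" pattern; the path, the
config setting and the trivial pipeline are placeholders:

    import java.net.URI;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class GuardedJobRunner {
        public static void main(String[] args) throws Exception {
            // 1. Assert the precondition outside of the job itself.
            URI input = new URI("file:///tmp/input");
            FileSystem fs = FileSystem.get(input);
            if (!fs.exists(new Path(input))) {
                throw new IllegalStateException("Precondition failed: " + input);
            }

            // 2. Configure Flink appropriately for this run.
            Configuration conf = new Configuration();
            conf.setString("pipeline.name", "guarded-job"); // example setting

            // 3. Only then build and run the job.
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment(conf);
            env.fromElements(1, 2, 3).print();
            env.execute();
        }
    }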
Hello everyone,
First of all, sorry for cross-posting. I asked on SO, but David Anderson
suggested that I reach out to the community via the mailing list. The link
to the SO question is the following:
https://stackoverflow.com/questions/71381266/using-another-filesystem-configuration-while-creating
Hello,
We are currently developing a RichParallelSourceFunction<> that reads from
different FileSystems dynamically, based on the configuration provided when
starting the job.
When running the tests with the hadoop-s3-presto library added to the
classpath, we can run the workload without any issues.
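For context, a minimal sketch of such a source, assuming the matching
filesystem plugin is available at runtime and a single input URI is passed
at submission time:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.flink.core.fs.FSDataInputStream;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

    // Resolves a Flink FileSystem from a URI given at job-submission time,
    // e.g. "s3://bucket/data.txt", and emits the file line by line.
    public class DynamicFsSource extends RichParallelSourceFunction<String> {

        private final String inputUri; // provided when starting the job
        private volatile boolean running = true;

        public DynamicFsSource(String inputUri) {
            this.inputUri = inputUri;
        }

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            // FileSystem.get() picks the implementation registered for the scheme.
            FileSystem fs = FileSystem.get(new URI(inputUri));
            try (FSDataInputStream in = fs.open(new Path(inputUri));
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(in, StandardCharsets.UTF_8))) {
                String line;
                while (running && (line = reader.readLine()) != null) {
                    ctx.collect(line);
                }
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }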
nager-numberoftaskslots
>
> On Wed, Aug 25, 2021 at 9:52 AM Gil De Grove
> wrote:
>
>> Hello,
>>
>> We are struggling a bit with an error in our kubernetes deployment.
>>
>> The deployment is composed of 2 flink job managers and 58 task managers.
>>
Hi Jonas,
Just wondering, are you trying to deploy via IAM service account
annotations in an AWS EKS cluster?
We noticed that when using Presto, the IAM service account was using the
EC2 metadata API inside AWS. However, when using an EKS service account,
the API used is the web identity token auth.
Not sure
Hello,
We are struggling a bit with an error in our Kubernetes deployment.
The deployment is composed of 2 Flink job managers and 58 task managers.
When deploying the jobs, everything goes fine at first, but after the
deployment of several jobs (a mix of batch and streaming jobs using the SQL
tab