Thanks Kim, I found the solution; we need to downgrade to controller-gen@v0.2.4 to
make this work.
But thanks a lot.
On Monday, October 18, 2021, 11:46:23 PM EDT, Youngwoo Kim (김영우)
wrote:
Hi Dhiru,
Take a look at this Flink operator:
https://github.com/spotify/flink-on-k8s-operator
Hi, I was planning to install Flink using a k8s operator on EKS version 1.20.
GitHub - GoogleCloudPlatform/flink-on-k8s-operator: Kubernetes operator for
managing the lifecycle of Apache Flink and Beam applications.
Sorry, there was an issue with the path of the S3 bucket; got this fixed.
Sorry for troubling you guys. On Sunday, October 10, 2021, 12:33:16 PM EDT,
Dhiru wrote:
We have configured the S3 bucket s3a://msc-sandbox-test-bucket. I am not sure how
some extra characters get added to the bucket?
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path
in absolute URI:
s3a://msc-sandbox-test-bucket3TjIvqnUkP1YBpoy.3MxSF/3TjIwLWrI71fbMZmGYK7r
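This kind of URISyntaxException typically shows up when a relative path is resolved
against a bucket URI that has no path component. A minimal sketch of a flink-conf.yaml
entry, assuming the bucket is used as a checkpoint directory (the bucket name is from
the thread, the checkpoints/ path is only illustrative):
  # Point checkpoints at an explicit directory under the bucket, not the bare bucket URI
  state.checkpoints.dir: s3a://msc-sandbox-test-bucket/checkpoints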
...submit with 1.14 in application mode.
That is the first obvious observation.
On Tue, Oct 5, 2021 at 5:35 AM Dhiru wrote:
My Dockerfile:
FROM flink:1.13.2-scala_2.12-java11
RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
RUN ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.
RUN mkdir -p /opt/flink/plugins/flink-s3-fs-presto
RUN ln -fs /opt/flink/opt/flink-s3-fs-presto-*.jar /opt/flink/plugins/flink-s3-fs-presto/.
Thanks Dawid. If I am not exposing the UI, how am I going to run my job? We
need to submit the jar somehow, and I do not want my Flink image tightly coupled
with my jar.
On Monday, October 4, 2021, 09:52:31 AM EDT, Dawid Wysakowicz
wrote:
Hi Dhiru,
For the question about auto scaling
Hi,
My requirement is to create a Flink cluster in application mode on k8s and not
expose the UI; I want to start a long-running job which can be instantiated at
boot time of Flink and keep running.
Use these resource files: jobmanager-application-ha.yaml and
taskmanager-j
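For reference, a minimal sketch (not the full jobmanager-application-ha.yaml from the
docs) of how application mode can start the job at container boot without using the UI
for submission; the image name and job class below are placeholders:
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: flink-jobmanager
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: flink
        component: jobmanager
    template:
      metadata:
        labels:
          app: flink
          component: jobmanager
      spec:
        containers:
        - name: jobmanager
          # placeholder image; the job jar must be visible to the process, e.g. under /opt/flink/usrlib
          image: my-flink-job-image:1.13.2
          # "standalone-job" starts the JobManager in application mode and runs the given
          # (placeholder) job class at startup, so nothing needs to be uploaded via the REST UI
          args: ["standalone-job", "--job-classname", "com.example.MyJob"]
The job artifact does have to be reachable from the container, but it can come from a
mounted volume or init container instead of being baked into the image, which avoids
coupling the Flink image to the jar.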
Thanks, I got this working. On Wednesday, September 29, 2021, 12:12:17 AM
EDT, Dhiru wrote:
I am following this link for setting up the HA configuration: ZooKeeper HA Services.
ZooKeeper HA Services # Flink's ZooKeeper HA services use ZooKeeper for high
availability services. Flink lever...
The ZooKeeper version I am using is 3.4.10.
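For reference, a minimal sketch of the ZooKeeper HA entries in flink-conf.yaml; the
quorum hosts, storage path, and cluster id below are placeholders:
  # Use ZooKeeper-based high availability
  high-availability: zookeeper
  # Connection string of the ZooKeeper quorum (placeholder hosts)
  high-availability.zookeeper.quorum: zk-0:2181,zk-1:2181,zk-2:2181
  # Durable storage for JobManager metadata, e.g. an S3 path (placeholder bucket)
  high-availability.storageDir: s3://my-bucket/flink/ha/
  # Root znode and cluster id under which this cluster keeps its state
  high-availability.zookeeper.path.root: /flink
  high-availability.cluster-id: /my-flink-cluster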
We need to override this using WebIdentityTokenFileCredentialsProvider
(https://github.com/aws/aws-sdk-java-v2/issues/1470#issuecomment-543601232);
otherwise Java gives precedence to the secret key and access keys over the service account.
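A minimal sketch of what such an override could look like in flink-conf.yaml, assuming
the flink-s3-fs-hadoop plugin is used and Hadoop S3A keys are honored; the exact provider
class depends on the AWS SDK bundled with your S3A libraries, so treat the class name
below as an assumption to verify:
  # Make S3A use web-identity (IRSA / service-account) credentials instead of static keys
  # (provider class name is an assumption; adjust to the class available in your AWS SDK)
  fs.s3a.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider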
On Saturday, September 25, 2021, 04:37:22 PM EDT, Xiangyu Su
wrote:
Hi
Please let me know if anyone can help me with this. On Friday, September 24,
2021, 01:45:39 PM EDT, Dhiru wrote:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      serviceAccountName: msc-s3-shared-content
      containers:
      - name: jobmanager
        image: test:latest
You might need to configure the access credentials. [1]
[1]
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/filesystems/s3/#configure-access-credentials
Best,
Yangze Guo
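For reference, a minimal sketch of the static-credential options described on that page,
set in flink-conf.yaml (placeholder values; on EKS an IAM role or service account is
generally preferable to static keys):
  # Picked up by both flink-s3-fs-hadoop and flink-s3-fs-presto (placeholder values)
  s3.access-key: YOUR_ACCESS_KEY
  s3.secret-key: YOUR_SECRET_KEY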
On Wed, Sep 22, 2021 at 2:17 PM Dhiru wrote:
>
>
> I see org.apache.hadoop.fs.FileSyst
Wednesday, September
22, 2021, 01:39:04 AM EDT, Dhiru wrote:
In my Flink image I have added both S3 plugins:
FROM flink:1.11.3-scala_2.12-java11
RUN mkdir ./plugins/flink-s3-fs-presto
RUN cp ./opt/flink-s3-fs-presto-1.11.3.jar ./plugins/flink-s3-fs-presto/
RUN mkdir ./plugins/flink-s3-fs-hadoop
RUN cp ./opt/flink-s3-fs-hadoop-1.11.3.jar ./plugins/flink-s3-fs-hadoop
Could you share your DataStream / Table / SQL code here?
On Tuesday, September 14, 2021, 3:49 AM, Dhiru wrote:
I am not sure why I get this error when we try to receive data from Apache Kafka,
but it works fine for me when I run via Confluent Kafka:
java.lang.ClassCastException: class java.lang.String cannot be cast to class
scala.Product (java.lang.String is in module java.base of loader 'bootstrap';
-docs-release-1.13/docs/deployment/filesystems/plugins/
On 08/09/2021 17:10, Dhiru wrote:
Yes, I copied it to the plugins folder, but I am not sure why I see the same jar in /opt
as well by default.
root@d852f125da1f:/opt/flink/plugins# ls
README.txt  flink-s3-fs-hadoop-1.13.1.jar  metrics ... -1.13.1.jar  metrics-graphite  metrics-jmx  metrics-slf4j
I need help with this soon.
On Wednesday, September 8, 2021, 09:26:46 AM EDT, Dhiru
wrote:
Yes, I copied it to the plugins folder, but I am not sure why I see the same jar in /opt
as well by default.
root@d852f125da1f:/opt/flink/plugins
I need to configure AWS S3 and am getting this error:
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find
a file system implementation for scheme 's3'. The scheme is directly supported
by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto.
Please ensur
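Besides building a custom image as in the Dockerfiles above, the official Flink Docker
image can enable its bundled S3 plugins at startup through an environment variable; a
minimal sketch for the container spec of the k8s deployment, assuming the jar names
match your Flink version:
  env:
  # The official image's entrypoint copies the listed plugins from /opt/flink/opt
  # into /opt/flink/plugins at container startup
  - name: ENABLE_BUILT_IN_PLUGINS
    value: flink-s3-fs-hadoop-1.13.1.jar;flink-s3-fs-presto-1.13.1.jar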
developer tools of your browser to check what response
the UI receives when attempting to upload the jar?
On 20/08/2021 07:55, Dhiru wrote:
Hello all,
I was able to run the sample example and upload a jar using the UI on the
cluster which I have deployed on k8s.
Today I had to reboot the jobmanager; after that I am not able to upload any jar to
my cluster, and I do not see any log to debug either.
Any help?
--kumar
Hello all, I read some articles; I think many companies using a Flink operator
run a separate cluster for each job, and this can be achieved using
flinkk8soperator? Please can you help me by sharing some pointers, videos, or git
links which could help me with installing on AWS EKS? And I have zookeeper/
Hi, I am very new to Flink. I am planning to install a Flink HA setup on an EKS
cluster with 5 worker nodes. Please can someone point me to the right materials
or direction on how to install it, as well as any sample job which I can run just for
testing to confirm that everything is working as expected.