Hello,
I have successfully been able to store data in an S3 bucket. I used to
have a similar issue earlier. What you need to confirm:
1. The S3 bucket is created with RW access (irrespective of whether it is
MinIO or AWS S3)
2. "flink/opt/flink-s3-fs-presto-1.14.0.jar" jar is copied to plugin
directory of "flink
s3a with the Hadoop S3 filesystem works fine for us with STS assume-role
credentials and with KMS.
Below is how our Hadoop s3a config looks. Since the endpoint is
globally whitelisted, we don't explicitly mention the endpoint.
fs.s3a.aws.credentials.provider:
org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
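For context, a fuller config along these lines might look like the
sketch below; the role ARN and KMS key are hypothetical placeholders,
and the keys themselves are standard Hadoop S3A options:

fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
fs.s3a.assumed.role.arn: arn:aws:iam::111122223333:role/flink-checkpoints
fs.s3a.server-side-encryption-algorithm: SSE-KMS
fs.s3a.server-side-encryption.key: <your-kms-key-arn>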
Hi Vamshi,
From your configuration I'm guessing that you're using Amazon S3 (not an
S3-compatible implementation such as MinIO).
Two comments:
- *s3.endpoint* should not contain the bucket name (the bucket is part of
your s3 path, e.g. *s3://<bucket>/<path>*)
- "*s3.path.style.access*: true" is only correct for 3rd party
implementati
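Putting those two comments together, a corrected flink-conf.yaml
fragment for Amazon S3 might look like the sketch below; the region is a
hypothetical placeholder, and for Amazon S3 the endpoint can usually be
omitted entirely:

s3.endpoint: s3.us-east-1.amazonaws.com
s3.path.style.access: false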
We are using Flink version 1.13.0 on Kubernetes.
For checkpointing we have configured the s3 filesystem via
flink-s3-fs-presto.
We have enabled SSE on our buckets with a KMS CMK.
flink-conf.yaml is configured as below.
s3.entropy.key: _entropy_
s3.entropy.length: 4
s3.path.style.access: true
s3.ssl.enabled: true
s3
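For what it's worth, a checkpoint directory that exercises the entropy
settings above might look like the sketch below; the bucket name is a
hypothetical placeholder, and Flink replaces the _entropy_ marker with 4
random characters at runtime:

state.checkpoints.dir: s3://<your-bucket>/_entropy_/checkpoints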