Hi,

We are using Ceph buckets to store checkpoints and savepoints, accessed
via the S3 protocol. Since we don't have any Hadoop integration, we added
a dependency on flink-s3-fs-presto.
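
For reference, the standard way to enable this filesystem in a Flink
distribution is as a plugin, as described in the Flink documentation
(the $FLINK_HOME path and the jar version below are illustrative):

# copy the bundled Presto S3 filesystem jar into the plugins directory
mkdir -p $FLINK_HOME/plugins/s3-fs-presto
cp $FLINK_HOME/opt/flink-s3-fs-presto-*.jar $FLINK_HOME/plugins/s3-fs-presto/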

Our Flink configuration looks like this:


state.checkpoint-storage: filesystem
state.checkpoints.dir: s3://my-bucket/flink_checkpoints/checkpoints
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink_ha_storage
s3.endpoint: "https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local:443"
s3.path-style-access: "true"
s3.access-key: "my-access-key-id"
s3.secret-key: "my-secret-access-key"

However, we encounter the following error when trying to write a checkpoint:

java.util.concurrent.CompletionException:
com.amazonaws.SdkClientException: Unable to execute HTTP request: PKIX
path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to
find valid certification path to requested target
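
For what it is worth, the certificate presented by the endpoint can be
inspected directly (same service DNS name as in the configuration above):

openssl s_client -connect rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local:443 -showcerts

Presumably the RGW certificate is issued by an internal OpenShift/Ceph CA
that the JVM's default truststore does not contain, which would explain
the PKIX failure.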

When we connect without SSL instead, using the following endpoint:

s3.endpoint: "http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local:80"

it works without issues.

I have seen workarounds that involve building a custom Flink image on top
of the community one (roughly along the lines of the sketch below), but I
would prefer to avoid maintaining our own image. If you have encountered
this issue, I would appreciate any suggestions on the best practice to
solve this.
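
For context, the kind of workaround I mean imports the CA certificate into
the JVM truststore at image build time; the base image tag, certificate
file name, alias, and the exact cacerts path (which depends on the Java
version in the image) are placeholders, not our actual setup:

FROM flink:<version>
# Import the Ceph RGW CA certificate into the JVM default truststore
COPY ceph-ca.crt /tmp/ceph-ca.crt
RUN keytool -importcert -noprompt -trustcacerts \
      -alias ceph-rgw-ca \
      -file /tmp/ceph-ca.crt \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit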

Thanks

Sigalit
