Hi,

We have a Flink Streaming application that uses S3 for storing checkpoints. We
are not using 'regular' S3, but rather IBM Object Storage, which exposes an
S3-compatible interface. We had quite a few challenges overriding the endpoint
from the default s3.amazonaws.com to our internal IBM Object Storage endpoint.
In 1.3.2, we managed to get this working by providing our own jets3t.properties 
file that overrode s3service.s3-endpoint 
(https://jets3t.s3.amazonaws.com/toolkit/configuration.html)
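For reference, the override we used in 1.3.2 looked roughly like this (the
endpoint host below is only illustrative, not our real one):

    s3service.s3-endpoint=objectstorage.mycompany.internal
    s3service.https-only=true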

When upgrading to 1.4.0, we added a dependency on the flink-s3-fs-hadoop
artifact. It seems our override via jets3t.properties is no longer relevant,
since it no longer uses the Hadoop implementation.
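
For context, the dependency we added is roughly the following (assuming the
usual org.apache.flink groupId):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-s3-fs-hadoop</artifactId>
        <version>1.4.0</version>
    </dependency>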

Is there a way to override this default endpoint, or can we achieve this with
the Presto S3 filesystem instead? Please note that if we provide the endpoint
in the URL for the state backend, it simply appends s3.amazonaws.com to the
URL, for example s3://myobjectstorageendpoint.s3.amazonaws.com.
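
To illustrate, the relevant part of our flink-conf.yaml is along these lines
(bucket and path names are placeholders):

    state.backend: filesystem
    state.checkpoints.dir: s3://myobjectstorageendpoint/checkpoints

With this, the hostname the S3 filesystem resolves ends up as
myobjectstorageendpoint.s3.amazonaws.com rather than our internal endpoint.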

Are there any other solutions, such as 'rolling back' to the Hadoop
implementation of S3?

Thanks,
Hayden
