Hi All,

I'm trying to migrate from NFS to S3 for checkpointing and I'm facing a few
issues. I have Flink running in Docker with the flink-s3-fs-hadoop jar copied
to the plugins folder. Even with the jar in place I'm getting the following
error:

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Hadoop is not in the classpath/dependencies.

Am I missing something?
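
For reference, this is roughly how I'm adding the jar in my Dockerfile (the
Flink version is just what my setup uses; I put the jar into its own
subfolder under plugins/, as the plugin docs suggest):

FROM flink:1.12.0
# copy the bundled S3 filesystem plugin into its own plugins subdirectory
RUN mkdir -p /opt/flink/plugins/s3-fs-hadoop && \
    cp /opt/flink/opt/flink-s3-fs-hadoop-1.12.0.jar /opt/flink/plugins/s3-fs-hadoop/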

The documentation says "Presto is the recommended file system for
checkpointing to S3". How can I enable this? Is there any specific
configuration I need to do for it?
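
Is it just a matter of swapping the plugin jar, i.e. something like this
instead of the hadoop one (again, the version is just from my setup):

RUN mkdir -p /opt/flink/plugins/s3-fs-presto && \
    cp /opt/flink/opt/flink-s3-fs-presto-1.12.0.jar /opt/flink/plugins/s3-fs-presto/

and then pointing state.checkpoints.dir at the s3:// path as usual?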

Also, I couldn't figure out how entropy injection works. Do I just create
the bucket with a checkpoints folder and Flink automatically injects the
entropy and creates a per-job checkpoint folder, or do I have to create the
whole structure myself? This is the layout and configuration I have in mind:

state.checkpoints.dir: s3://bucket/checkpoints/_entropy_/dashboard-job

s3.entropy.key: _entropy_
s3.entropy.length: 4   # default
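
If I'm reading the docs right, Flink would replace _entropy_ with random
characters in the checkpoint data file paths and drop it entirely from the
metadata path, so the result would look something like this (the 4-character
prefix below is made up):

configured: s3://bucket/checkpoints/_entropy_/dashboard-job/
data files: s3://bucket/checkpoints/x7qz/dashboard-job/
metadata:   s3://bucket/checkpoints/dashboard-job/

Is that understanding correct?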

Thanks
