[ https://issues.apache.org/jira/browse/FLINK-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17342087#comment-17342087 ]
Galen Warren commented on FLINK-19481:
--------------------------------------

I wanted to check in here. Should I wait until this question is resolved before proceeding with the PR?

Personally, my preference would be to see Flink's HadoopFileSystem + GoogleHadoopFileSystem as at least _an_ option for the file system implementation, simply because those components are well established. I'm not opposed to an alternate implementation, though, i.e. as has been done for S3. If that's the path we're going down, it might mean some changes to the code in the PR I'm working on, hence the question.

> Add support for a flink native GCS FileSystem
> ---------------------------------------------
>
>                 Key: FLINK-19481
>                 URL: https://issues.apache.org/jira/browse/FLINK-19481
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / FileSystem, FileSystems
>    Affects Versions: 1.12.0
>            Reporter: Ben Augarten
>            Priority: Minor
>              Labels: auto-deprioritized-major
>
> Currently, GCS is supported, but only through the Hadoop connector [1].
>
> The objective of this improvement is to add support for checkpointing to Google Cloud Storage through the Flink FileSystem abstraction. This would allow the `gs://` scheme to be used for savepointing and checkpointing. Long term, it would be nice if we could use the GCS FileSystem as a source and sink in Flink jobs as well.
>
> I also hope that implementing a Flink-native GCS FileSystem will simplify usage of GCS, because the Hadoop FileSystem ends up bringing in many unshaded dependencies.
>
> [1] https://github.com/GoogleCloudDataproc/hadoop-connectors
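For context, here is a minimal sketch of what the Hadoop-wrapper option mentioned above could look like, assuming Flink's FileSystemFactory and HadoopFileSystem APIs plus the GCS Hadoop connector. The class name, package, and configuration handling are illustrative assumptions, not the actual PR code:

{code:java}
package org.apache.flink.fs.gs; // illustrative package, not the PR's actual layout

import java.io.IOException;
import java.net.URI;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystemFactory;
import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;

import org.apache.hadoop.conf.Configuration;

import com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem;

/**
 * Sketch of a factory that registers the "gs" scheme and delegates all I/O to the
 * GCS Hadoop connector, wrapped in Flink's existing HadoopFileSystem adapter.
 */
public class GSFileSystemFactory implements FileSystemFactory {

    @Override
    public String getScheme() {
        return "gs";
    }

    @Override
    public FileSystem create(URI fsUri) throws IOException {
        // Credentials and other GCS settings would normally come from
        // flink-conf.yaml / core-site.xml rather than being set programmatically here.
        Configuration hadoopConfig = new Configuration();

        GoogleHadoopFileSystem googleHadoopFs = new GoogleHadoopFileSystem();
        googleHadoopFs.initialize(fsUri, hadoopConfig);

        // Reuse Flink's Hadoop wrapper rather than re-implementing the FileSystem API.
        return new HadoopFileSystem(googleHadoopFs);
    }
}
{code}

A factory along these lines would be discovered via META-INF/services/org.apache.flink.core.fs.FileSystemFactory, and checkpoints could then be pointed at GCS with, for example, state.checkpoints.dir: gs://<bucket>/checkpoints in flink-conf.yaml.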