Hi, question on this page:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3-simple-storage-service
"You need to point Flink to a valid Hadoop configuration..."
How do you point Flink to the Hadoop config?
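(A minimal sketch of what that usually involves, assuming a standard Flink 1.4 setup; the /etc/hadoop/conf path below is only a placeholder:

# in flink-conf.yaml: absolute path to the directory containing core-site.xml / hdfs-site.xml
fs.hdfs.hadoopconf: /etc/hadoop/conf

# or, alternatively, set the environment before starting the cluster
export HADOOP_CONF_DIR=/etc/hadoop/conf
)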
    On Saturday, January 13, 2018, 4:56:15 AM PST, Till Rohrmann <trohrm...@apache.org> wrote:
 Hi,

the flink-connector-filesystem module contains the BucketingSink, a
connector with which you can write your data to a file system. It provides
exactly-once processing guarantees and allows you to write data to different
buckets [1].
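
A rough sketch of how the BucketingSink is typically wired into a job (the
S3 path, bucketer format, and batch size here are placeholders, not taken
from this thread):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

// assuming `events` is an existing DataStream<String>
BucketingSink<String> sink = new BucketingSink<>("s3://my-bucket/flink-output");
// roll into a new time-based bucket directory every hour (placeholder format)
sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"));
// start a new part file once the current one reaches ~128 MB (placeholder size)
sink.setBatchSize(1024L * 1024L * 128L);
events.addSink(sink);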

The flink-filesystem module contains the different file system implementations
(like MapR FS, HDFS, or S3). If you want to use, for example, the S3 file
system, then there are the flink-s3-fs-hadoop and flink-s3-fs-presto modules.

So if you want to write your data to S3 using the BucketingSink, then you
have to add flink-connector-filesystem for the BucketingSink as well as an
S3 file system implementation (e.g. flink-s3-fs-hadoop or
flink-s3-fs-presto).
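
With Maven that would roughly look like the following (the version and Scala
suffix are placeholders for whichever 1.4.x build you use; note the docs also
describe copying the flink-s3-fs-* jar from opt/ into Flink's lib/ folder
instead of bundling it, so treat this as a sketch):

<!-- BucketingSink connector -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-filesystem_2.11</artifactId>
  <version>1.4.0</version>
</dependency>

<!-- one S3 file system implementation -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-s3-fs-hadoop</artifactId>
  <version>1.4.0</version>
</dependency>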

Usually, there should be no need to change Flink's filesystem
implementations. If you want to add a new connector, then this would go to
flink-connectors or to Apache Bahir [2].

[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/filesystem_sink.html

[2]
https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/index.html#connectors-in-apache-bahir

Cheers,
Till

On Fri, Jan 12, 2018 at 7:22 PM, cw7k <c...@yahoo.com.invalid> wrote:

> Hi, I'm trying to understand the difference between the flink-filesystem
> and flink-connector-filesystem.  How is each intended to be used?
> If adding support for a different storage provider that supports HDFS,
> should additions be made to one or the other, or both?  Thanks.