Thanks. Let me clarify my thinking a bit more. Generally, I would
prefer that we concentrate connector functionality into a few
standard, widely used connectors, such as Kafka and the various file
systems with different formats. We should make these core connectors
as powerful as we can, and also prevent bad situations from
arising, such as "if you want this feature, please use connector A,
but if you want that other feature, please use connector B".
Best,
Kurt


On Tue, Sep 17, 2019 at 11:11 AM Jun Zhang <825875...@qq.com> wrote:

> Hi Kurt:
> thank you very much.
>         I will take a closer look at FLIP-63.
>
>         In this PR, the underlying sink is StreamingFileSink, not
> BucketingSink, but I named the new sink Bucket.
>
>
> On 09/17/2019 10:57, Kurt Young <ykt...@gmail.com>
> wrote:
>
> Hi Jun,
>
> Thanks for bringing this up; in general I'm +1 on this feature. As
> you might know, there is another ongoing effort around this kind
> of table sink, covered in the newly proposed partition support
> rework [1]. In that proposal, we also want to introduce a new
> file system connector, which covers not only partition
> support but also end-to-end exactly-once semantics in streaming mode.
>
> I would suggest combining these two efforts into one. The
> benefits would be saving some review effort and reducing the number
> of core connectors, which eases our maintenance burden in the future.
> What do you think?
>
> BTW, BucketingSink is already deprecated; I think we should build
> on StreamingFileSink instead.
>
> Best,
> Kurt
>
> [1]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-63-Rework-table-partition-support-td32770.html
>
>
> On Tue, Sep 17, 2019 at 10:39 AM Jun Zhang <825875...@qq.com> wrote:
>
>> Hello everyone:
>> I am a user and fan of Flink, and I want to join the Flink community. I
>> contributed my first PR a few days ago. Could anyone help review my
>> code? If there is something wrong, I would be grateful for any
>> advice.
>>
>> This PR came out of my development work: I use SQL to read data
>> from Kafka and then write it to HDFS, but I found that there is no suitable
>> TableSink. Checking the documentation, I saw that the File System Connector is
>> only experimental (
>> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#file-system-connector),
>> so I wrote a Bucket File System Table Sink that supports writing streaming
>> data to HDFS and the local file system, with data formats including JSON,
>> CSV, Parquet, and Avro. I will subsequently add support for other formats,
>> such as Protobuf, Thrift, etc.
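>>
>> As a hypothetical sketch only (the table name, columns, path, and
>> property keys below are made up for illustration and are not taken
>> from the actual PR), the DDL for such a file system sink might look
>> something like:
>>
>>     -- sketch of a streaming file system sink table, not the PR's real DDL
>>     CREATE TABLE file_sink (
>>       user_id STRING,
>>       amount DOUBLE
>>     ) WITH (
>>       'connector.type' = 'filesystem',
>>       'connector.path' = 'hdfs://namenode:8020/tmp/output',
>>       'format.type' = 'json'
>>     );
>>
>>     -- then: INSERT INTO file_sink SELECT user_id, amount FROM kafka_source;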
>>
>> In addition, I also added documentation, a Python API, unit tests,
>> end-to-end tests, sql-client and DDL support, and compiled it on Travis.
>>
>> The issue is https://issues.apache.org/jira/browse/FLINK-12584
>> Thank you very much.
>>
>>
>>
