[ https://issues.apache.org/jira/browse/FLINK-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270677#comment-15270677 ]

ASF GitHub Bot commented on FLINK-1996:
---------------------------------------

Github user yjshen commented on the pull request:

    https://github.com/apache/flink/pull/1961#issuecomment-216877262
  
    Hi @fhueske, thanks for the explanation. If I understand correctly, the 
current `toSink` API is a general one that allows writing table contents to a 
wide variety of `Sinks` without blowing up the `flink-table` module's 
dependencies. The design seems quite reasonable to me now.
    
    BTW, if we are going to support some **native** output formats, the 
`register` & `reflection` approach seems feasible. By doing this, we could not 
only write
    ``` scala
    t.toSink("csv").option("path", "/foo").option("fieldDelim", "|")
    ``` 
    in the Table API, but also
    
    ``` sql
    INSERT OVERWRITE INTO parquet_table_a SELECT * FROM table_b
    ```
    in SQL.
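
    To make the register & reflection idea concrete, here is a minimal Scala 
sketch of a string-keyed sink registry with builder-style options, mirroring 
the proposed `toSink("csv").option(...)` chain. All names here (`TableSink`, 
`SinkFactory`, `SinkRegistry`, `SinkBuilder`) are illustrative placeholders, 
not actual Flink APIs.

    ``` scala
    // A sink that can emit rows (illustrative, not a Flink interface).
    trait TableSink {
      def emit(rows: Seq[String]): Unit
    }

    // A factory that builds a sink from string options.
    trait SinkFactory {
      def create(options: Map[String, String]): TableSink
    }

    // Registry mapping format names (e.g. "csv") to factories.
    object SinkRegistry {
      private var factories = Map.empty[String, SinkFactory]

      def register(format: String, factory: SinkFactory): Unit =
        factories += (format -> factory)

      def lookup(format: String): SinkFactory =
        factories.getOrElse(format, sys.error(s"unknown sink format: $format"))
    }

    // Builder that accumulates options before creating the sink,
    // so calls can be chained: new SinkBuilder("csv").option(...).option(...)
    class SinkBuilder(format: String) {
      private var options = Map.empty[String, String]

      def option(key: String, value: String): SinkBuilder = {
        options += (key -> value)
        this
      }

      def build(): TableSink = SinkRegistry.lookup(format).create(options)
    }

    // Register a toy "csv" sink that honors a fieldDelim option.
    SinkRegistry.register("csv", new SinkFactory {
      def create(options: Map[String, String]): TableSink = new TableSink {
        private val delim = options.getOrElse("fieldDelim", ",")
        def emit(rows: Seq[String]): Unit =
          rows.foreach(cols => println(cols.split(',').mkString(delim)))
      }
    })

    // Usage resembling the proposed API:
    val sink = new SinkBuilder("csv")
      .option("path", "/foo")
      .option("fieldDelim", "|")
      .build()
    ```

    The same registry lookup could back the SQL path: the planner resolves 
`parquet_table_a` to a registered format name and instantiates the sink 
reflectively, keeping format-specific dependencies out of `flink-table`.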


> Add output methods to Table API
> -------------------------------
>
>                 Key: FLINK-1996
>                 URL: https://issues.apache.org/jira/browse/FLINK-1996
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table API
>    Affects Versions: 0.9
>            Reporter: Fabian Hueske
>            Assignee: Fabian Hueske
>
> Tables need to be converted to DataSets (or DataStreams) to write them out. 
> It would be good to have a way to emit Table results directly for example to 
> print, CSV, JDBC, HBase, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
