[ https://issues.apache.org/jira/browse/FLINK-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16041825#comment-16041825 ]

ASF GitHub Bot commented on FLINK-6281:
---------------------------------------

Github user haohui commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3712#discussion_r120765156
  
    --- Diff: flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCOutputFormat.java ---
    @@ -202,14 +202,20 @@ public void writeRecord(Row row) throws IOException {
                        upload.addBatch();
                        batchCount++;
                        if (batchCount >= batchInterval) {
    -                           upload.executeBatch();
    -                           batchCount = 0;
    +                           flush();
                        }
                } catch (SQLException | IllegalArgumentException e) {
                        throw new IllegalArgumentException("writeRecord() failed", e);
                }
        }
        }
     
    +   void flush() throws SQLException {
    +           if (upload != null) {
    +                   upload.executeBatch();
    --- End diff --
    
    `executeBatch()` is a synchronous call; it throws `SQLException` on failure and aborts the sink. The behavior has not changed.
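    The buffer-then-flush pattern under discussion can be sketched outside of Flink and JDBC. The following is a minimal, hypothetical `BatchedWriter` (not the actual `JDBCOutputFormat`): `writeRecord()` buffers rows and triggers `flush()` once `batchInterval` is reached, with `flushCount` standing in for the side effect of `upload.executeBatch()`; the emptiness guard mirrors the `upload != null` check in the diff.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    /**
     * Minimal sketch of the batch-and-flush pattern discussed above.
     * Not Flink's JDBCOutputFormat: the batch list and flushCount stand
     * in for PreparedStatement.addBatch()/executeBatch(). In the real
     * code, a SQLException from executeBatch() propagates synchronously
     * and aborts the sink.
     */
    public class BatchedWriter {
        private final int batchInterval;
        private final List<String> batch = new ArrayList<>();
        int flushCount = 0; // observable stand-in for executeBatch() calls

        BatchedWriter(int batchInterval) {
            this.batchInterval = batchInterval;
        }

        void writeRecord(String row) {
            batch.add(row);              // stands in for upload.addBatch()
            if (batch.size() >= batchInterval) {
                flush();                 // same refactoring as in the diff
            }
        }

        void flush() {
            if (!batch.isEmpty()) {      // guard mirrors "upload != null"
                // real code: upload.executeBatch(); may throw SQLException
                batch.clear();
                flushCount++;
            }
        }

        public static void main(String[] args) {
            BatchedWriter w = new BatchedWriter(3);
            for (int i = 0; i < 7; i++) {
                w.writeRecord("row-" + i);
            }
            // 7 records, interval 3: two automatic flushes, one row pending
            if (w.flushCount != 2) throw new AssertionError("expected 2 flushes");
            w.flush();                   // final flush, as close() would do
            if (w.flushCount != 3) throw new AssertionError("expected 3 flushes");
            System.out.println("ok");
        }
    }
    ```

    Extracting `flush()` this way also lets a `close()` method drain the partial batch with the same code path, which is the motivation for the change in the diff.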


> Create TableSink for JDBC
> -------------------------
>
>                 Key: FLINK-6281
>                 URL: https://issues.apache.org/jira/browse/FLINK-6281
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table API & SQL
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
>
> It would be nice to integrate the table APIs with the JDBC connectors so that 
> the rows in the tables can be directly pushed into JDBC.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
