[ https://issues.apache.org/jira/browse/FLINK-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16111305#comment-16111305 ]
ASF GitHub Bot commented on FLINK-7221:
---------------------------------------

Github user kgeis commented on the issue:

    https://github.com/apache/flink/pull/4459

    (I filed FLINK-7221). Thanks for putting this together. I've reviewed the change and have no criticism.

> JDBCOutputFormat swallows errors on last batch
> ----------------------------------------------
>
>                 Key: FLINK-7221
>                 URL: https://issues.apache.org/jira/browse/FLINK-7221
>             Project: Flink
>          Issue Type: Bug
>          Components: Batch Connectors and Input/Output Formats
>    Affects Versions: 1.3.1
>        Environment: Java 1.8.0_131, PostgreSQL driver 42.1.3
>           Reporter: Ken Geis
>           Assignee: Fabian Hueske
>
> I have a data set with ~17000 rows that I was trying to write to a PostgreSQL
> table that I did not (yet) have permission on. No data was loaded, and Flink
> did not report any problem outputting the data set. The only indication I
> found of my problem was in the PostgreSQL log.
>
> With the default parallelism (8) and the default batch interval (5000), my
> batches were ~2000 rows each, so they were never executed in
> {{JDBCOutputFormat.writeRecord(..)}}. {{JDBCOutputFormat.close()}} does a
> final call on {{upload.executeBatch()}}, but if there is a problem, it is
> logged at INFO level and not rethrown.
>
> If I decrease the batch interval to 100 or 1000, then an error is properly
> reported.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
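For illustration, below is a minimal sketch of the error-swallowing pattern the description reports. This is not the actual Flink source; the class, field, and constructor names here are hypothetical, loosely following the {{upload}} / batch-interval naming used in the issue. It shows why errors surface when the batch interval is reached in {{writeRecord(..)}} but vanish when the last partial batch is flushed in {{close()}}.

{code:java}
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical, simplified JDBC writer illustrating the reported bug:
// the final executeBatch() in close() swallows SQLExceptions instead of
// failing the job.
public class SwallowingJdbcWriter implements AutoCloseable {

    private static final Logger LOG =
            LoggerFactory.getLogger(SwallowingJdbcWriter.class);

    private final Connection dbConn;
    private final PreparedStatement upload;
    private final int batchInterval;
    private int batchCount = 0;

    public SwallowingJdbcWriter(String jdbcUrl, String insertSql,
                                int batchInterval) throws SQLException {
        this.dbConn = DriverManager.getConnection(jdbcUrl);
        this.upload = dbConn.prepareStatement(insertSql);
        this.batchInterval = batchInterval;
    }

    public void writeRecord(Object value) throws IOException {
        try {
            upload.setObject(1, value);
            upload.addBatch();
            batchCount++;
            if (batchCount >= batchInterval) {
                // Errors on a full batch DO propagate to the caller.
                upload.executeBatch();
                batchCount = 0;
            }
        } catch (SQLException e) {
            throw new IOException("Batch execution failed.", e);
        }
    }

    @Override
    public void close() {
        try {
            if (batchCount > 0) {
                // Flush the last partial batch. With parallelism 8, batch
                // interval 5000, and ~2000 rows per task, this is the ONLY
                // place the batch ever executes.
                upload.executeBatch();
            }
            upload.close();
            dbConn.close();
        } catch (SQLException e) {
            // BUG (as described in the issue): the failure is only logged
            // at INFO level and never rethrown, so the job "succeeds".
            LOG.info("Error while closing the JDBC writer.", e);
        }
    }
}
{code}

A fix along the lines of the linked PR would presumably rethrow the exception from {{close()}} (e.g. wrapped in a {{RuntimeException}}) so the final batch failure fails the task instead of disappearing into the log.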