[ https://issues.apache.org/jira/browse/FLINK-19522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Till Rohrmann updated FLINK-19522:
----------------------------------
    Component/s: Table SQL / API

> Add ability to set auto commit on jdbc driver from Table/SQL API
> ----------------------------------------------------------------
>
>                 Key: FLINK-19522
>                 URL: https://issues.apache.org/jira/browse/FLINK-19522
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / JDBC, Table SQL / API
>    Affects Versions: 1.11.2
>            Reporter: Dylan Forciea
>            Priority: Major
>         Attachments: Screen Shot 2020-10-01 at 5.03.24 PM.png, Screen Shot 2020-10-01 at 5.03.31 PM.png
>
>
> When I tried to stream data from PostgreSQL via the JDBC source connector in the SQL API, the entire table was loaded into memory before streaming began. This is because the PostgreSQL JDBC driver requires the autoCommit flag on the connection to be set to false for result-set streaming to take place.
> FLINK-12198 provided the means to do this with the JDBCInputSource, but this did not extend to the SQL description. This option should be added.
> To reproduce, create a very large table and try to read it in with the SQL API. You will see a large spike in memory usage with no data streaming, and then all the data will arrive at once. I will attach a couple of graphs from before and after I patched the code myself to set auto-commit.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
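For context, the behavior the reporter describes maps naturally onto a connector option in the CREATE TABLE DDL. A hedged sketch of what such an option could look like in the Table/SQL API; the option key 'scan.auto-commit', the table name, and the connection settings below are illustrative assumptions, not something defined by this issue:

```sql
-- Hypothetical DDL sketch: a JDBC source table that turns off auto-commit
-- so the PostgreSQL driver streams rows with a server-side cursor instead
-- of materializing the whole result set in memory.
CREATE TABLE large_pg_table (
  id BIGINT,
  payload STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',   -- assumed connection URL
  'table-name' = 'large_pg_table',
  'scan.fetch-size' = '1000',                        -- fetch rows in batches
  'scan.auto-commit' = 'false'                       -- assumed option name for this issue's fix
);
```

Note that the fetch size matters as well: the PostgreSQL driver only streams results when auto-commit is off and a non-zero fetch size is set on the statement.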