Hi Konstantinos,

It seems that setting auto-commit is not directly possible in the current
JDBCInputFormatBuilder.
However, there is a way to specify the fetch size [1] for your DB
round-trips; doesn't that resolve your issue?

Similarly, JDBCOutputFormat uses a batching mode to buffer rows before
flushing them to the DB.
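For reference, a rough sketch of how the fetch size could be set on the
builder (untested; the connection parameters, query, and row type below are
placeholders, and the exact builder methods may differ by Flink version):

```java
// Sketch only: configure JDBCInputFormat with a fetch size so the driver
// retrieves rows from the DB in chunks rather than all at once.
JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("org.postgresql.Driver")          // placeholder driver
        .setDBUrl("jdbc:postgresql://localhost:5432/db") // placeholder URL
        .setUsername("user")                             // placeholder
        .setPassword("pass")                             // placeholder
        .setQuery("SELECT id, val FROM big_table")       // placeholder query
        .setRowTypeInfo(rowTypeInfo)                     // your RowTypeInfo
        .setFetchSize(1000)  // rows per DB round-trip
        .finish();
```

Note that whether the fetch size is honored can depend on the driver and
connection settings, so it is worth verifying against your PostgreSQL setup.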

--
Rong

[1]
https://docs.oracle.com/cd/E18283_01/java.112/e16548/resltset.htm#insertedID4

On Fri, Apr 12, 2019 at 6:23 AM Papadopoulos, Konstantinos <
konstantinos.papadopou...@iriworldwide.com> wrote:

> Hi all,
>
> We are facing an issue when trying to integrate PostgreSQL with Flink
> JDBC. When you establish a connection to the PostgreSQL database, it is
> in auto-commit mode. It means that each SQL statement is treated as a
> transaction and is automatically committed, but this functionality results
> in unexpected behavior (e.g., out-of-memory errors) when executed for large
> result sets. In order to bypass such issues, we must disable the
> auto-commit mode. To do this, in a simple Java application, we call the
> setAutoCommit() method of the Connection object.
>
> So, my question is: How can we achieve this by using JDBCInputFormat of
> Flink?
>
> Thanks in advance,
>
> Konstantinos
>
>
