-----Original Message-----
>From: Laurenz Albe <laurenz.a...@cybertec.at> 
>Sent: 20 September 2021 15:18
>To: Niels Jespersen <n...@dst.dk>; pgsql-gene...@postgresql.org
>Subject: Re: Possibilities for optimizing inserts across oracle_fdw foreign data 
>wrapper
>
>On Sun, 2021-09-19 at 10:28 +0000, Niels Jespersen wrote:
>> We are often using the oracle_fdw to transfer data between Postgres (version 
>> 11+) and Oracle (version 18+). It works great.
>>  
>> However I have a task at hand that requires inserting a few billion rows in 
>> an Oracle table from a Postgres query.
>> If I could make the Oracle insert direct load, that would usually also 
>> increase throughput.
>> But is that possible here? There are no constraints defined on the 
>> destination tables.
>
>The cause of the bad performance for bulk data modifications is that the FDW 
>API is built that way: each row INSERTed means a round trip between PostgreSQL 
>and Oracle.
>
>That could be improved by collecting rows and inserting them in bulk on the 
>Oracle side, but I don't feel like implementing that and complicating the 
>code.
>
>From my point of view, oracle_fdw is good for reading, but not for bulk writes.
>
>Yours,
>Laurenz Albe

Thank you for clarifying. I will use a Python script for this, then: read large 
chunks of data from Postgres and insert equally large chunks into Oracle, 
possibly using direct load. I know this can work. 
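For what it's worth, the chunked approach could be sketched roughly like this. This is a hypothetical outline, not tested against real databases: it assumes the psycopg2 and python-oracledb drivers, and the table, column, and connection names are made up. The APPEND_VALUES hint is Oracle's way of requesting a direct-path insert for an array-bound INSERT.

```python
# Rough sketch of a chunked Postgres -> Oracle copy. Assumptions: the
# psycopg2 and python-oracledb drivers; table/column names are invented.

CHUNK_SIZE = 50_000  # illustrative batch size; tune for memory vs. round trips

def batches(cursor, size=CHUNK_SIZE):
    """Yield lists of up to `size` rows from any DB-API cursor."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            break
        yield rows

# The transfer loop itself (commented out because it needs live databases;
# all connection parameters below are placeholders):
#
# import psycopg2
# import oracledb
#
# pg = psycopg2.connect("dbname=src")
# ora = oracledb.connect(user="scott", password="tiger", dsn="orcl")
# src = pg.cursor(name="bulk_read")   # named = server-side cursor, streams rows
# src.itersize = CHUNK_SIZE
# src.execute("SELECT a, b, c FROM source_table")
# dst = ora.cursor()
# for chunk in batches(src):
#     # APPEND_VALUES asks Oracle for a direct-path insert of the bound array
#     dst.executemany(
#         "INSERT /*+ APPEND_VALUES */ INTO target_table (a, b, c) "
#         "VALUES (:1, :2, :3)",
#         chunk)
#     ora.commit()   # a direct-path-loaded table must be committed per batch
```

Each executemany() call sends one batch in a single round trip, which is exactly what avoids the per-row round trips described above for the FDW path.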

Regards Niels
