Hi Zoltan,

Is there any particular reason why you don't do a bulk insert, such as:
   insert into target_table
   select ... from source_table(s) (with joins etc)
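
For example, something roughly like this (table and column names are only
placeholders, since I don't know your schema):

    -- one set-based statement per Target table instead of a cursor loop
    delete from target_table where log_id = 123;

    insert into target_table (log_id, col_a, col_b)
    select 123, s.col_a, d.col_b
    from source_a s
    join source_b d on d.source_a_id = s.id;

A single INSERT ... SELECT runs the joins once and writes all the rows in one
pass, which is usually much faster than inserting row by row from a cursor.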

Regards,
Milos



On Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <
szalontai.zol...@t-online.hu> wrote:

> Hi,
>
>
>
> We have a db.t2.medium class database on AWS.
>
> We use a procedure to transfer data records from the Source schema to the
> Target schema.
>
> Transfers are identified by the log_id field in the target table.
>
>
>
> The procedure is (a simplified sketch follows the steps):
>
> 1. All records with the current log_id value are deleted from the Target
> table.
>
> 2. A complicated SELECT (joining numerous tables) is built on the Source
> system.
>
> 3. A cursor is defined based on this SELECT.
>
> 4. We go through the cursor and insert new records into the Target table
> with this log_id.
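>
> For illustration, one of these per-table transfers looks roughly like this
> (assuming PL/pgSQL; the table, column and procedure names are simplified
> placeholders, and the real SELECT joins many more tables):
>
> create or replace procedure transfer_target_table(p_log_id bigint)
> language plpgsql
> as $$
> declare
>     rec record;
> begin
>     -- step 1: delete all records of this transfer from the Target table
>     delete from target_table where log_id = p_log_id;
>
>     -- steps 2-4: loop over a cursor on the (complicated) Source SELECT
>     -- and insert the rows one by one with this log_id
>     for rec in
>         select s.col_a, d.col_b
>         from source_a s
>         join source_b d on d.source_a_id = s.id
>     loop
>         insert into target_table (log_id, col_a, col_b)
>         values (p_log_id, rec.col_a, rec.col_b);
>     end loop;
> end;
> $$;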
>
>
>
> (We actually have about 100 tables in the Target schema, and the database
> backup file is about 1 GB, but we do the same for all the Target tables.)
>
>
>
> Our procedure is extremely slow on the first run: 3 days for the 100
> tables. The second and all subsequent runs are fast enough (about 15
> minutes).
>
> The only difference between the first run and all the others is that in
> the first run there are no records in the Target schema with this log_id.
>
>
>
> It seems that the DELETE operation in step 1 frees up some “space”, and
> the INSERT operation in step 4 can reuse this space. But if no records are
> deleted in step 1, the procedure is extremely slow.
>
>
>
> To speed up the first run we found the following workaround:
>
> We inserted dummy records into the Target tables with the proper log_id,
> and indeed the first run became very fast again.
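>
> In other words, something like this (placeholder names and values):
>
>     -- a dummy row per Target table so that the DELETE in step 1 has
>     -- something to remove before the first real run
>     insert into target_table (log_id, col_a, col_b)
>     values (123, null, null);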
>
>
>
> Is there any “normal” way to speed up this procedure?
>
> In the production environment there will only be “first runs”; the same
> log_id will never be used again.
>
>
>
>
>
> Thanks,
>
> Zoltán
>
>
>
>
>


-- 
Milos Babic
http://www.linkedin.com/in/milosbabic
