On Fri, Aug 01, 2008 at 06:30:36PM +0200, Linos wrote:
> Hello,
>     I have migrated from MaxDB to PostgreSQL recently and I am having
> a speed problem in large transactions over slow links, because of
> PostgreSQL's automatic rollback-on-error behaviour. I generate data in
> some tables via triggers on other tables, and I do large inserts of
> that generated data into other PostgreSQL servers (for replication
> purposes); for this example say 20000 rows. I want to do this in a
> transaction so I can roll back on certain errors, but I use a
> fallback: if a duplicate is found, I re-issue the last insert's data
> as an UPDATE to the existing row. So I have to set a savepoint before
> each insert and release it after the insert succeeds, and my traffic
> flow looks something like this.

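If I'm reading that right, the per-row flow is roughly this (a sketch
only; the table foo and its columns are made-up names for
illustration):

    SAVEPOINT row_ins;
    INSERT INTO foo (id, val) VALUES (1, 'a');
    -- only if the INSERT fails with a unique violation:
    ROLLBACK TO SAVEPOINT row_ins;
    UPDATE foo SET val = 'a' WHERE id = 1;
    RELEASE SAVEPOINT row_ins;

That's several round-trips per row, and over a slow link the latency
dominates.
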
If the goal is to reduce latency costs, the best way could be:

1. Use COPY to transfer all the data to the server in one stream, into
a temporary table.
2. Use an UPDATE and an INSERT to merge the temporary table into the
old one. SQL has a MERGE statement but PostgreSQL doesn't support it,
so you'll have to do it by hand (rough sketch below).

That would be a total of 5 round-trips, including transaction start/end.
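
In practice that could look something like the following (a sketch
only; foo, foo_load and the columns are made-up names, and the rows
themselves are streamed through the COPY):

    BEGIN;
    CREATE TEMP TABLE foo_load (LIKE foo) ON COMMIT DROP;
    COPY foo_load FROM STDIN;   -- all 20000 rows in one stream
    -- update the rows that already exist
    UPDATE foo SET val = l.val
      FROM foo_load l
     WHERE foo.id = l.id;
    -- insert the rest
    INSERT INTO foo (id, val)
    SELECT l.id, l.val
      FROM foo_load l
     WHERE NOT EXISTS (SELECT 1 FROM foo WHERE foo.id = l.id);
    COMMIT;

No per-row savepoints are needed, because duplicates are resolved by
the UPDATE/INSERT pair rather than by catching errors.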

hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.
