Well, with any database, if I had to insert 20,000,000 records into a table, I wouldn't do it in one transaction: it creates a very big intermediate file, and the commit at the end is really heavy. I would cut the transaction into mini-transactions of, say, 1000 records each.
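The batching idea above can be sketched as follows. This is a minimal illustration using Python's stdlib sqlite3 module as a stand-in for PostgreSQL; the table `t`, the helper `batched_insert`, and the batch size are all hypothetical, but the pattern (one commit per chunk instead of one giant transaction) is the point:

```python
import sqlite3
from itertools import islice

def batched_insert(conn, rows, batch_size=1000):
    """Insert rows in batches, committing after each batch so that
    no single transaction grows to millions of rows."""
    cur = conn.cursor()
    it = iter(rows)
    total = 0
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        cur.executemany("INSERT INTO t(v) VALUES (?)", batch)
        conn.commit()  # one commit per mini-transaction
        total += len(batch)
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(v INTEGER)")
# 2500 rows -> three commits (1000 + 1000 + 500)
n = batched_insert(conn, ((i,) for i in range(2500)), batch_size=1000)
print(n)  # 2500
```

With a real client library you would do the same thing from application code, since (in PostgreSQL of this era) a function cannot commit partway through.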
There really isn't much more code than that: no triggers, no keys, etc. Imagine something like this:

FOR all IN (SELECT * FROM table1) LOOP
    FOR some IN (SELECT * FROM ...) LOOP
        INSERT INTO table2 VALUES (all.id, some.id);
    END LOOP;
END LOOP;

I wish I could put a COMMIT inside the inner FOR loop!

-----Original Message-----
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 24, 2007 16:48
To: Célestin HELLEU
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Very big transaction in a stored procedure : how can i commit in the middle of it ?

On Thu, May 24, 2007 at 03:59:15PM +0200, Célestin HELLEU wrote:
> Hi,
>
> I already know that a transaction is impossible inside a function, but I
> think I really need a way to work around this.
>
> I have a stored procedure in pl/sql that makes about 2,000,000 inserts.
> The way it works, PostgreSQL is making a single transaction out of all of
> this, resulting in performance so bad that I can't wait for the procedure
> to finish.

In general, making separate transactions slows things down rather than speeding them up. Have you actually checked what the cause of the slowness is? Are there any triggers, foreign keys, etc. defined? Is the query in the loop fast enough? You're going to have to provide more details.

Have a nice day,
--
Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to
> litigate.

2007 - Maporama International - Outgoing mail scanned by BlackSpider

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org/
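Worth noting about the nested FOR loops in the message above: they compute a cross product row by row, and that can usually be expressed as a single set-based INSERT ... SELECT, which the planner executes far faster than row-at-a-time inserts. A sketch of that rewrite, using Python's stdlib sqlite3 module as a stand-in for PostgreSQL (the table name `other` and the column names are illustrative assumptions, since the inner query in the original was left incomplete):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1(id INTEGER);
    CREATE TABLE other(id INTEGER);
    CREATE TABLE table2(a_id INTEGER, b_id INTEGER);
    INSERT INTO table1(id) VALUES (1), (2), (3);
    INSERT INTO other(id) VALUES (10), (20);
""")
# One set-based statement instead of a row-by-row nested loop:
conn.execute("""
    INSERT INTO table2(a_id, b_id)
    SELECT a.id, b.id FROM table1 a CROSS JOIN other b
""")
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
print(count)  # 3 rows x 2 rows = 6 pairs
```

In PostgreSQL the equivalent single statement would replace both loops and the per-row INSERT, which is one way to sidestep the commit-in-the-middle problem entirely.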