On Wednesday 03 May 2006 16:12, Larry Rosenman wrote:
>Javier de la Torre wrote:
>> It is inserts.
>>
>> I create the inserts myself with a Python program I have created
>> to migrate MySQL databases to PostgreSQL (by the way, if someone wants
>> it...)
>
>Ok, that makes *EACH* insert a transaction, with all the overhead.
>
>You need to batch the inserts between BEGIN;/COMMIT; pairs, or, better
>yet, set it up as a COPY.
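
(For the archives: the COPY form Larry mentions would look something 
like the output pg_dump produces. The table and column names below are 
made up, and the data rows are tab-separated:)

    COPY persons (id, name) FROM stdin;
    1	Foo
    2	Bar
    \.

One round trip loads all the rows, with none of the per-statement 
overhead.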

I'm using essentially the same approach for my custom backup/restore 
procedure. I also found it a very slow process. But when I wrapped 
each table script (i.e. 20-30k of INSERTs) in a single BEGIN;/COMMIT; 
pair, the time it took to populate the entire database went down from 
about half an hour to 50 seconds. Very impressive ;-)
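
In case it helps anyone, a wrapped table script simply looks something 
like this (table and values are only an illustration):

    BEGIN;
    INSERT INTO persons (id, name) VALUES (1, 'Foo');
    INSERT INTO persons (id, name) VALUES (2, 'Bar');
    -- ... the remaining 20-30k INSERTs ...
    COMMIT;

All the rows then go in as one transaction instead of 20-30k separate 
ones.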

However, I'm wondering if there's a practical limit to how many rows you 
can insert within one transaction?
-- 
Leif Biberg Kristensen :: Registered Linux User #338009
http://solumslekt.org/ :: Cruising with Gentoo/KDE
