Yes,

Thanks. I am doing this now...

It is definitely faster, but I will also discover now whether there is a
limit on the transaction side... I am going to try to insert 60 million
records into a table in one single transaction.
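
If a single 60-million-row transaction turns out to be too much, I could
always commit in fixed-size batches instead. A rough sketch of what the
generated script could look like (the table and column names here are
just invented for the example):

    BEGIN;
    INSERT INTO my_table (id, name) VALUES (1, 'first row');
    INSERT INTO my_table (id, name) VALUES (2, 'second row');
    -- ... up to, say, 100000 INSERTs per batch ...
    COMMIT;

    BEGIN;
    INSERT INTO my_table (id, name) VALUES (100001, 'next batch');
    -- ... and so on until all 60 million rows are loaded ...
    COMMIT;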

In any case, I still don't understand why PostgreSQL was not taking
resources before, without the transaction. If it has to create a
transaction per insert, I understand it has more work to do, but why is
it not taking all the resources of the machine? I mean, why is it only
using 3% of them?

Javier.

On 5/3/06, Leif B. Kristensen <[EMAIL PROTECTED]> wrote:
On Wednesday 03 May 2006 16:12, Larry Rosenman wrote:
>Javier de la Torre wrote:
>> It is inserts.
>>
>> I create the inserts myself with a Python program I have created
>> to migrate MySQL databases to PostgreSQL (by the way, if someone wants
>> it...)
>
>Ok, that makes *EACH* insert a transaction, with all the overhead.
>
>You need to batch the inserts between BEGIN;/COMMIT; pairs, or, better
>yet, set it up as a COPY.
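
For reference, the COPY form of the same load looks roughly like this
(table, column and file names are just placeholders, not anything from
the actual schema):

    -- server-side COPY, reading a tab-delimited file on the server
    COPY my_table (id, name) FROM '/path/to/data.txt';

    -- or the client-side variant from within psql
    \copy my_table (id, name) from 'data.txt'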

I'm using essentially the same approach for my custom backup/restore
procedure. I also found it a very slow process. But when I wrapped each
table script (i.e. 20-30k of INSERTs) in a single transaction, the time
it took to populate the entire database went down from about half an
hour to 50 seconds. Very impressive ;-)
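
In other words, each table script is just the plain INSERTs bracketed by
one transaction, something along these lines (table and values invented
for the sake of the example):

    BEGIN;
    INSERT INTO my_table (id, name) VALUES (1, 'first row');
    INSERT INTO my_table (id, name) VALUES (2, 'second row');
    -- ... the remaining 20-30k INSERTs for this table ...
    COMMIT;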

However, I'm wondering if there's a practical limit to how many rows you
can insert within one transaction?
--
Leif Biberg Kristensen :: Registered Linux User #338009
http://solumslekt.org/ :: Cruising with Gentoo/KDE
