Célestin HELLEU <[EMAIL PROTECTED]> writes:
> Well, with any database, if I had to insert 20 000 000 records in a table,
> I wouldn't do it in one transaction; it makes very big intermediate files,
> and the commit at the end is really heavy.
There may be some databases where the above is correct thinking, but Postgres isn't one of them. The time to do COMMIT, per se, is independent of the number of rows inserted. You need to find out where your bottleneck actually is, without any preconceptions inherited from some other database.

			regards, tom lane
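To make the point concrete, here is a minimal sketch of a single-transaction bulk load in Postgres; the table name, column layout, and file path are hypothetical, not taken from the thread:

    BEGIN;
    CREATE TABLE bulk_demo (id integer, payload text);
    -- COPY streams the rows in bulk, far cheaper than millions of INSERTs
    COPY bulk_demo (id, payload) FROM '/tmp/bulk_demo.tsv';
    COMMIT;  -- commit cost is flat, not proportional to the row count

(COPY ... FROM reads a server-side file and needs appropriate privileges; from psql, \copy does the same thing with a client-side file.)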