Thanks to all who responded. Using COPY instead of INSERT really solved the 
problem - the whole process took about 1h 20min on an indexed table, with 
constraints (which is close to our initial expectations). We're performing some 
additional tests now. I'll post some more observations when finished.
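For anyone reaching this thread later: the COPY path can also be driven from client code by streaming rows through STDIN instead of issuing per-row INSERTs. Below is a minimal sketch; the table name `items` and its columns are hypothetical, and the psycopg2 call at the end is shown but not executed here.

```python
import csv
import io

def rows_to_copy_buffer(rows):
    """Serialize rows into an in-memory CSV buffer suitable for
    PostgreSQL's COPY ... FROM STDIN WITH (FORMAT csv)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    buf.seek(0)  # rewind so COPY reads from the start
    return buf

# Hypothetical usage with psycopg2 (requires an open cursor `cur`):
# cur.copy_expert(
#     "COPY items (id, name) FROM STDIN WITH (FORMAT csv)",
#     rows_to_copy_buffer([(1, "first"), (2, "second")]),
# )
```

One COPY statement moves the whole batch in a single round trip, which is where the bulk of the speedup over row-by-row INSERT comes from.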
  ----- Original Message ----- 
  From: Márcio Geovani Jasinski 
  To: pgsql-general@postgresql.org 
  Sent: Friday, November 09, 2007 1:52 PM
  Subject: Re: INSERT performance deteriorates quickly during a large import


  Hello Krasimir,

  You've received a lot of good advice above, and I'd like to add one more item:

  d) Make sure your PHP code is not recursive. Since you said memory usage is 
stable, your method is probably iterative. 
  A recursive method would add a little time to each insert and consume more 
memory.
  But even an iterative method must be written so it runs exactly once per row; 
perhaps your code is executing more often than it needs to.

  Pay attention to Tomas's advice, and beyond that (I agree with Cris) "there 
should be no reason for loading data to get more costly as
  the size of the table increases" - please check your code.

  I ran some experiments a long time ago with 40,000 rows containing a lot of 
BLOBs. I used PHP code doing SELECT/INSERT from Postgres to Postgres, and the 
time wasn't constant, but it wasn't as bad as in your case. (And I didn't apply 
Tomas's a, b, and c advice.)

  Good Luck
  -- 
  Márcio Geovani Jasinski 
