I have a text file that contains 200k rows.  These rows are to be imported
into our database.  The majority of them will already exist, while a few are
new.  Here are a few options I've tried:

I've had PHP cycle through the file row by row: if the row is already there,
delete it and do a straight insert.  That took a while, though.
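
Roughly what that loop looks like, with a made-up table ("items"), made-up
columns, and a tab-delimited file standing in for my real schema:

<?php
// Placeholder schema: table "items" with columns (id, name, price).
$db = pg_connect('dbname=mydb');

$fh = fopen('rows.txt', 'r');
while (($line = fgets($fh)) !== false) {
    list($id, $name, $price) = explode("\t", rtrim($line, "\n"));

    // Drop any existing copy of the row, then insert the new one.
    pg_query_params($db, 'DELETE FROM items WHERE id = $1', array($id));
    pg_query_params($db,
        'INSERT INTO items (id, name, price) VALUES ($1, $2, $3)',
        array($id, $name, $price));
}
fclose($fh);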

Now I have PHP read each row from the text file and array_combine it with a
default array I keep in the class, so I get key/value pairs.  I then take
that generated array and array_diff it against the data array I pulled from
the db, which leaves me the columns that are different, and I do an update
on only those columns for that specific row.  This is slow, and after about
180,000 rows PHP throws a memory error.  I'm resetting all my vars to NULL
at each iteration, so I'm not sure what's up.
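
Stripped down, that second approach looks something like this (same
placeholder table and columns; I've used array_diff_assoc here so the
comparison is keyed by column, which is what I'm after):

<?php
// Placeholder schema again: table "items", columns (id, name, price).
$db      = pg_connect('dbname=mydb');
$columns = array('id', 'name', 'price');   // default key array from the class

$fh = fopen('rows.txt', 'r');
while (($line = fgets($fh)) !== false) {
    $fileRow = array_combine($columns, explode("\t", rtrim($line, "\n")));

    $res   = pg_query_params($db, 'SELECT * FROM items WHERE id = $1',
                             array($fileRow['id']));
    $dbRow = pg_fetch_assoc($res);
    if ($dbRow === false) {
        continue;   // brand-new row; insert handled separately
    }

    // Columns whose values differ between the file and the database.
    $changed = array_diff_assoc($fileRow, $dbRow);
    unset($changed['id']);
    if (!$changed) {
        continue;
    }

    // Build "col = $n" pairs and update only the changed columns.
    $set = array();
    $i   = 1;
    foreach ($changed as $col => $val) {
        $set[] = $col . ' = $' . $i++;
    }
    pg_query_params($db,
        'UPDATE items SET ' . implode(', ', $set) . ' WHERE id = $' . $i,
        array_merge(array_values($changed), array($fileRow['id'])));
}
fclose($fh);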


Anyone have a better way to do this?  In MySQL, I could simply do a REPLACE
on each row...but not in postgres.
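
For comparison, the MySQL REPLACE I mean is a single statement that deletes
any existing row with the same primary key and re-inserts it (placeholder
connection, table, and values again):

<?php
// Placeholder connection and values; the point is just the REPLACE syntax.
$my = mysqli_connect('localhost', 'user', 'pass', 'mydb');
mysqli_query($my,
    "REPLACE INTO items (id, name, price) VALUES (1, 'example', 2.50)");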

Thanks!
