Previously, I was inserting bulk data in the following way:

for each item
    update item in table
    if row count is 0
        insert item into table
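
Roughly, in concrete SQL each iteration was something like this (the
table items(id integer primary key, qty integer) and the values are
just made up for illustration):

-- try the update first
UPDATE items SET qty = 5 WHERE id = 42;
-- if the UPDATE reports 0 rows affected, insert instead
INSERT INTO items (id, qty) VALUES (42, 5);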

I realized I was suffering from performance problems.

Later, I realized it is much faster to use

COPY

I am very happy with the speed.
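
For example (made-up table, columns, and file path), something along
the lines of:

COPY items (id, qty) FROM '/tmp/items.csv' WITH CSV;

or, since COPY FROM a file reads it on the server, the client-side
psql form:

\copy items (id, qty) FROM 'items.csv' WITH CSV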

Later, I realized that COPY doesn't work well if I already have a row
with the same unique key. What I did was:

# Try to remove old rows first
delete rows where <condition>
# And perform the really fast insertion
COPY
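
With the same made-up names, the sequence is roughly:

-- remove any rows that would collide with the incoming batch
DELETE FROM items WHERE id IN (42, 43, 44);
-- then perform the really fast bulk load
COPY items (id, qty) FROM '/tmp/items.csv' WITH CSV;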

I was wondering, is this a common technique for fast bulk data
insertion? Are there other techniques?

Thanks and Regards
Yan Cheng CHEOK


      
