On Sun, May 24, 2015 at 4:26 AM, Arup Rakshit
<arupraks...@rocketmail.com> wrote:
> Hi,
>
> I am copying the data from a CSV file to a table using the "COPY" command. But
> one thing I got stuck on is how to skip duplicate records while copying
> from the CSV to the table. Looking at the documentation, it seems PostgreSQL
> doesn't have any built-in tool to handle this with the "COPY" command. By
> Googling I found the idea below of using a temp table.
>
> http://stackoverflow.com/questions/13947327/to-ignore-duplicate-keys-during-copy-from-in-postgresql
>
> I am also thinking: what if I let the records get inserted, and then delete
> the duplicate records from the table, as this post suggested -
> http://www.postgresql.org/message-id/37013500.dff0a...@manhattanproject.com.
>
> Both of the solutions look like doing double work. But I am not sure which is
> the best solution here. Can anybody suggest which approach I should adopt?
> Or if you guys have any better ideas for this task, please share.
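For reference, the staging-table approach from the linked Stack Overflow answer can be sketched as follows. This is a minimal sketch, not a definitive recipe: `target_table`, `id`, and the CSV path are placeholder names, and it assumes a single-column key called `id` identifies a duplicate.

```sql
BEGIN;

-- Stage the raw CSV in a temp table with the same shape as the target.
CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);

COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);

-- Insert only rows whose key is not already in the target table;
-- DISTINCT ON also drops duplicates that occur within the CSV itself.
INSERT INTO target_table
SELECT DISTINCT ON (id) *
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM target_table t WHERE t.id = s.id
);

COMMIT;
```

The temp table is dropped automatically at the end of the session, so no cleanup is needed.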

Have you looked at pgloader?
http://pgloader.io/index.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
