Hello, I'm in the process of porting a large application from Ingres to PostgreSQL. We make heavy use of bulk loading via the COPY statement in ESQL/C. Consider the SQL statements below (in a psql session on an arbitrary database):
    CREATE TABLE copytest(f1 INTEGER, f2 INTEGER);
    CREATE UNIQUE INDEX copytest_idx ON copytest USING BTREE(f1, f2);
    COPY copytest FROM '/tmp/copytest';

Given the file /tmp/copytest:

    1	1
    2	2
    3	3
    4	4
    4	4
    5	5
    6	6

this will result in the following output:

    ERROR:  copy: line 5, Cannot insert a duplicate key into unique index copytest_idx

However, my application code assumes that duplicate rows will simply be
ignored (this is the case in Ingres, and I believe in Oracle's bulkloader
too). I propose modifying _bt_check_unique() in
src/backend/access/nbtree/nbtinsert.c to emit a NOTICE (rather than ERROR)
elog() and return NULL (or appropriate) to the calling function when a
duplicate key is detected and a COPY FROM is in progress (a new parameter
would flag this).

Would this seem a reasonable thing to do? Does anyone rely on COPY FROM
raising an ERROR on duplicate input? Would something like:

    WITH ON_DUPLICATE = CONTINUE|TERMINATE

(or similar) need to be added to the COPY command (I hope not)?

Thanks,

-- 
Lee Kindness, Senior Software Engineer
Concept Systems Limited.

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
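P.S. To make the intent of the proposal concrete, here is a small
self-contained C sketch of the decision logic I have in mind. All names
here (check_unique, DupAction, in_copy_from) are illustrative stand-ins,
not actual backend identifiers; the real change would live inside
_bt_check_unique() and use the backend's elog() machinery.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical model of the proposed change: on a duplicate key,
     * the outcome depends on whether a COPY FROM is in progress
     * (a flag the caller would pass down). */
    typedef enum { INSERT_OK, SKIP_WITH_NOTICE, RAISE_ERROR } DupAction;

    static DupAction check_unique(bool key_is_duplicate, bool in_copy_from)
    {
        if (!key_is_duplicate)
            return INSERT_OK;          /* no conflict: insert proceeds */

        if (in_copy_from)
            return SKIP_WITH_NOTICE;   /* elog(NOTICE, ...); row ignored */

        return RAISE_ERROR;            /* elog(ERROR, ...) as today */
    }

    int main(void)
    {
        /* ordinary INSERTs keep today's behaviour */
        assert(check_unique(true, false) == RAISE_ERROR);
        /* during COPY FROM, duplicates are skipped with a NOTICE */
        assert(check_unique(true, true) == SKIP_WITH_NOTICE);
        assert(check_unique(false, true) == INSERT_OK);
        printf("ok\n");
        return 0;
    }

That is, existing callers are unaffected; only the COPY path opts in to
the lenient behaviour.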