On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau <kamaual...@gmail.com> wrote:
>> Hi,
>> I am looking for an efficient and effective solution to eliminate
>> duplicates in a continuously updated "cumulative" transaction table
>> (no deletions are envisioned as all non-redundant records are
>> important). Below is my situation.
>
> Is there a reason you can't use a unique index and detect failed
> inserts and reject them?
>

I think it would have been possible to make use of a unique index as
you suggested, and silently trap the uniqueness violation.

But in my case (as pointed out in my previous, lengthy mail) I am
inserting multiple records at once, which implicitly means a single
transaction. In this scenario, a uniqueness violation by even a
single record will cause all the other records in the batch to be
rejected as well.
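
For illustration, here is a minimal sketch of what I mean (the table
and column names are made up):

CREATE TABLE txn (
    id   serial PRIMARY KEY,
    code text NOT NULL UNIQUE
);

-- A multi-row INSERT is a single statement: if 'b' already exists,
-- the whole statement fails and neither 'a' nor 'c' is inserted.
INSERT INTO txn (code) VALUES ('a'), ('b'), ('c');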

Is there perhaps a way to single out only the record(s) violating the
unique constraint, without having to perform individual record
inserts? I am following the example found here:
http://www.postgresql.org/docs/8.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Allan.
