On Fri, 2005-11-11 at 20:22, Bruno Wolff III wrote:
> On Fri, Nov 11, 2005 at 18:48:33 +0100,
>   Csaba Nagy <[EMAIL PROTECTED]> wrote:
> > OK, I'm relatively new on this list, and I might have missed a few
> > discussions on this topic.
> > I wonder if doing it this way would not be better than using a table
> > lock:
> >
> > - set a save point;
> > - insert the row;
> > - on error:
> >   - roll back to the save point;
> >   - update the row;
> > - on success, release the save point;
> >
> > This would provide less contention while paying the price for the save
> > point. In low-contention scenarios the table lock would be better, and I
> > wonder which is better in high-contention scenarios: the table lock or
> > the save point version...
>
> You may not be able to update the row after the insert fails. If there is
> an insert occurring in another transaction, the row may not be visible to
> the current transaction, in which case you can neither insert nor update the
> row. You need to wait for the other transaction to commit or roll back.
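[For reference, the sequence proposed above maps onto SQL roughly as follows. Table and column names are illustrative, and the "on error" branch has to be taken by the client application, since plain SQL cannot branch on an error by itself:]

```sql
BEGIN;
SAVEPOINT upsert_sp;

INSERT INTO test (col) VALUES (1);

-- if the INSERT raised a unique-constraint error, the client runs:
--   ROLLBACK TO SAVEPOINT upsert_sp;
--   UPDATE test SET col = 1 WHERE col = 1;
-- if the INSERT succeeded, the client runs:
--   RELEASE SAVEPOINT upsert_sp;

COMMIT;
```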
Are you sure? From what I understand, the insert will block until the other transaction commits or rolls back, and only fail (with a duplicate-key error) if that transaction commits. Look at this:

session_1=> create table test (col smallint primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
CREATE TABLE
session_1=> begin;
BEGIN
session_1=> insert into test values (1);
INSERT 165068987 1

session_2=> begin;
BEGIN
session_2=> insert into test values (1);
[session_2 is now waiting]

session_1=> commit;
COMMIT

[session_2 wakes up]
ERROR:  duplicate key violates unique constraint "test_pkey"

So it looks like predicate locking is already in place for primary key conditions...

Cheers,
Csaba.
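[One way to package the savepoint approach server-side is a plpgsql function that loops until either the update or the insert wins; a plpgsql EXCEPTION block sets an implicit savepoint, so the rollback-to-savepoint step happens automatically. A sketch, with illustrative names not taken from the thread:]

```sql
CREATE FUNCTION merge_test(k smallint) RETURNS void AS $$
BEGIN
    LOOP
        -- try the update first; if a matching row exists, we are done
        UPDATE test SET col = k WHERE col = k;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row yet: try to insert it
        BEGIN
            INSERT INTO test (col) VALUES (k);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- a concurrent transaction inserted the same key and committed;
            -- the implicit savepoint is rolled back, so loop and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

[As in the transcript above, the INSERT will block on the concurrent insert until the other transaction finishes, so the loop only spins when that transaction actually commits.]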