Robert Haas <robertmh...@gmail.com> wrote:
 
> rhaas=# create table concurrent (x integer primary key);
> NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
> "concurrent_pkey" for table "concurrent"
> CREATE TABLE
> rhaas=# insert into concurrent values (1);
> rhaas=# begin;
> BEGIN
> rhaas=# insert into concurrent values (2);
> INSERT 0 1
> 
> <switch to a different window>
> 
> rhaas=# update concurrent set x=x where x=2;
> UPDATE 0
 
That surprised me.  I would have thought that the INSERT would have
created an "in doubt" tuple which would block the UPDATE.  What is
the reason for not doing so?
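 
If I'm reading the behavior right, the uncommitted tuple simply isn't
visible to the second session's snapshot, so the UPDATE scans zero
rows and never finds anything to block on; only an attempt to insert
the same key blocks, on the uncommitted unique index entry.  A rough
sketch of what I mean (not an actual transcript; second session,
while the first transaction is still open):
 
select * from concurrent where x = 2;     -- returns zero rows
update concurrent set x = x where x = 2;  -- UPDATE 0, nothing to block on
insert into concurrent values (2);        -- blocks until the first
                                          -- transaction commits or aborts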
 
FWIW, I did a quick test and REPEATABLE READ also lets this pass, but
with the SSI patch SERIALIZABLE seems to cover this correctly,
generating a serialization failure when such access is involved in
write skew:
 
test=# create table concurrent (x integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"concurrent_pkey" for table "concurrent"
CREATE TABLE
test=# insert into concurrent select generate_series(1, 20000);
INSERT 0 20000
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (0);
INSERT 0 1
test=# update concurrent set x = 30001 where x = 30000;
UPDATE 0
 
<different session>
 
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (30000);
INSERT 0 1
test=# update concurrent set x = -1 where x = 0;
UPDATE 0
test=# commit;
ERROR:  could not serialize access due to read/write dependencies
among transactions
HINT:  The transaction might succeed if retried.
 
I'll need to add a test to cover this, because it might have broken
with one of the optimizations on my list had you not pointed out this
behavior.
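 
Roughly, the permutation such a test needs to pin down is the one
above, sketched here as an ordered interleaving (s1/s2 are just
session labels) rather than in whatever syntax the test harness ends
up using:
 
s1: begin isolation level serializable;
s1: insert into concurrent values (0);
s1: update concurrent set x = 30001 where x = 30000;  -- UPDATE 0
s2: begin isolation level serializable;
s2: insert into concurrent values (30000);
s2: update concurrent set x = -1 where x = 0;         -- UPDATE 0
s2: commit;  -- expected: serialization failure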
 
On the other hand:
 
<session 1>
 
test=# drop table concurrent ;
DROP TABLE
test=# create table concurrent (x integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"concurrent_pkey" for table "concurrent"
CREATE TABLE
test=# insert into concurrent select generate_series(1, 20000);
INSERT 0 20000
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (0);
INSERT 0 1
 
<session 2>
 
test=# begin isolation level serializable;
BEGIN
test=# select * from concurrent where x = 0;
 x
---
(0 rows)

test=# insert into concurrent values (0);
<blocks>
 
<session 1>
 
test=# commit;
COMMIT
 
<session 2>
 
ERROR:  duplicate key value violates unique constraint
"concurrent_pkey"
DETAIL:  Key (x)=(0) already exists.
 
Anyway, I thought this might be of interest in terms of the MERGE
patch concurrency issues, since the SSI patch has been mentioned.
 
-Kevin
