We have been using the following pattern to refresh certain data in a table:

begin;
delete from foo where bar='myfoo';
insert into foo all of the correct data for myfoo;
commit;

Our thinking was that if two of these ran at close to the same time, the
first transaction would finish and then the second would run, leaving the
table consistent.  However, this is not the case: the second transaction
deletes nothing, and we end up with double the inserted data.  Our guess
is that the second transaction's DELETE is working against the data as it
looked at the start of the transaction, not as it looks when the lock on
the rows is released -- and by then that data has already been deleted by
the first transaction, while the rows the first transaction inserted are
invisible to it.

Is there a way to make the DELETE re-plan so that it sees the data
inserted by the first transaction once it is allowed to continue?
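One idea we have not yet verified (my own sketch, not something we have in
production) is to take an explicit table lock at the top of the
transaction, so the second transaction's DELETE statement does not even
start until the first has committed -- at which point, under READ
COMMITTED, the DELETE gets a fresh snapshot and should see the newly
inserted rows:

```sql
BEGIN;
-- Assumption: SHARE ROW EXCLUSIVE conflicts with itself, so two
-- concurrent runs of this transaction will queue here rather than
-- interleave their DELETE/INSERT work.
LOCK TABLE woody IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM woody WHERE myname = 'woody';
INSERT INTO woody (mynumber, myname)
    SELECT generate_series(1, 1000), 'woody';
COMMIT;
```

If that lock mode is too strong for other readers/writers of the table,
presumably any self-conflicting mode would do; I have not tested which is
the weakest that works.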

The following is what I was using as a test case:

CREATE TABLE woody (
         id serial,
         mynumber int4,
         myname varchar,
         CONSTRAINT woody_pkey PRIMARY KEY (id)
         );
CREATE INDEX myname_INDEX ON woody (myname);

INSERT INTO woody (mynumber, myname) SELECT generate_series(1, 1000),
'woody';

I then placed the following into a file called woody2.sql:
BEGIN;
DELETE from woody WHERE myname='woody';
INSERT INTO woody (mynumber, myname) SELECT generate_series(1, 1000),
'woody';

connection1
iss=> \i woody2.sql
BEGIN
DELETE 1000
INSERT 0 1000

connection2
iss=> \i woody2.sql
BEGIN

connection1
iss=> commit;
COMMIT

connection2
DELETE 0
INSERT 0 1000
iss=> commit;
COMMIT
iss=> select count(*) from woody where myname='woody';
 count
-------
  2000
(1 row)


Thanks,
George
iGLASS Networks
www.iglass.net
