Tom Lane <t...@sss.pgh.pa.us> wrote:
> rl...@pablowe.net writes:
>> There is an edge case where Postgres will return an error to the
>> client, but commit the operation anyway, thus breaking the
>> contract with the client:
> 
>> postgres=# begin; insert into s.t (c) values (1);
>> <postgres process core dumps (note that the forked processes were
>> not affected)>
>> postgres=# commit;
>> <this returns an error to the application>
>> <postgres process restarts>
>> The data is now on both the master and slave, but the application
>> believes that the transaction was rolled back.  A well-behaved
>> application will dutifully retry the transaction because it
>> *should* have rolled back.  However, the data is there so we'll
>> have had the transaction execute *twice*.
> 
> Huh?  If the backend dumped core before you sent it the commit,
> the data will certainly not be committed.  It might be physically
> present on disk, but it won't be considered valid.
 
I suppose there is a window between the commit becoming effective and
the acknowledgment returning to the application; from a user
perspective, a crash in that window would be hard to distinguish from
what the OP described.  I don't see how that can be avoided, though,
short of a transaction manager using 2PC.  Any time the server
crashes while a COMMIT is pending, one must check to see whether it
"took".
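That "check whether it took" step can be sketched client-side: tag each
transaction with a unique token, and after a connection loss during
COMMIT, verify the token before retrying.  A minimal sketch (the token
scheme is hypothetical, and an in-memory set stands in for the server's
committed state; a real client would reconnect and SELECT the token from
a logging table):

```python
import uuid

# Stand-in for the server's committed state (hypothetical; a real
# client would reconnect and query a txn_log table for the token).
committed_tokens = set()

def run_transaction(apply_work, token=None, crash_during_commit=False):
    """Apply work and commit, tagging the transaction with a token.

    Returns the token so the caller can verify the outcome later.
    """
    token = token or str(uuid.uuid4())
    apply_work()
    committed_tokens.add(token)   # commit becomes effective here...
    if crash_during_commit:
        # ...but the acknowledgment never reaches the client.
        raise ConnectionError("server closed the connection unexpectedly")
    return token

def commit_took(token):
    """After a crash during COMMIT, check whether the commit 'took'."""
    return token in committed_tokens

def safe_retry(apply_work):
    """Retry only if the first attempt verifiably did not commit."""
    token = str(uuid.uuid4())
    try:
        run_transaction(apply_work, token=token, crash_during_commit=True)
    except ConnectionError:
        if commit_took(token):
            return "already committed; do not retry"
        return run_transaction(apply_work, token=token)
    return "committed"
```

A client that blindly retries on any COMMIT error would execute the
work twice in this scenario; checking the token first avoids exactly
the double execution the OP described.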
 
As long as the client application sees the connection as dead in
this situation, I think PostgreSQL is doing everything just as it
should.
 
-Kevin

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
