On Tue, Jan 14, 2014 at 3:07 AM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:
>> Right, but with your approach, can you really be sure that you have
>> the right rejecting tuple ctid (not reject)? In other words, as you
>> wait for the exclusion constraint to conclusively indicate that there
>> is a conflict, minutes may have passed in which time other conflicts
>> may emerge in earlier unique indexes. Whereas with an approach where
>> values are locked, you are guaranteed that earlier unique indexes
>> have no conflicting values. Maintaining that property seems useful,
>> since we check in a well-defined order, and we're still projecting a
>> ctid. Unlike when row locking is involved, we can make no assumptions
>> or generalizations around where conflicts will occur. Although that
>> may also be a general concern with your approach when row locking is
>> involved, for multi-master replication use-cases. There may be some
>> value in knowing it cannot have been earlier unique indexes (and so
>> the existing values for those unique indexes in the locked row should
>> stay the same - don't many conflict resolution policies work that
>> way?).
>
> I don't understand what you're saying. Can you give an example?
>
> In the use case I was envisioning above, i.e. you insert N rows, and
> if any of them violate a constraint, you still want to insert the
> non-violating rows instead of rolling back the whole transaction, you
> don't care. You don't care what existing rows the new rows conflicted
> with.
>
> Even if you want to know what you conflicted with, I can't make sense
> of what you're saying. In the btreelock approach, the value locks are
> immediately released once you discover that there's a conflict. So by
> the time you get to do anything with the ctid of the existing tuple
> you conflicted with, new conflicting tuples might've appeared.
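(To make the "insert N rows, keep the non-violating ones" use case quoted
above concrete, here is a minimal sketch. The syntax is illustrative
only - essentially the ON CONFLICT DO NOTHING form that PostgreSQL
eventually shipped, not the syntax of either patch in this thread - and
the table and column names are made up.)

    -- Hypothetical table with two unique indexes, for illustration only.
    CREATE TABLE tab (
        a int PRIMARY KEY,
        b int UNIQUE,
        v text
    );

    INSERT INTO tab VALUES (1, 10, 'existing');

    -- Insert N rows in one statement; rows that would violate either
    -- unique constraint are skipped, the rest are inserted, and the
    -- transaction is not rolled back.
    INSERT INTO tab VALUES
        (1, 20, 'conflicts on a, skipped'),
        (2, 10, 'conflicts on b, skipped'),
        (3, 30, 'inserted')
    ON CONFLICT DO NOTHING;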
It's true that new conflicting tuples can appear once the value locks
are released, but at least the timeframe in which an additional conflict
may occur on just-locked index values is bounded to more or less an
instant. In any case, how much this matters is an interesting question,
and perhaps one that Andres can weigh in on, as someone who knows a lot
about multi-master replication.

This issue is particularly interesting because this testcase appears to
make both patches livelock, for reasons that I believe are related:

https://github.com/petergeoghegan/upsert/blob/master/torture.sh

I have an idea of what I could do to fix this, but I don't have time to
make sure that my hunch is correct. I'm travelling tomorrow to give a
talk at PDX pug, so I'll have limited access to e-mail.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers