On Sat, Nov 23, 2013 at 11:52 PM, Peter Geoghegan <p...@heroku.com> wrote:
> I have some concerns about what you've done that may limit my
> immediate ability to judge performance, and the relative merits of
> both approaches generally. Now, I know you just wanted to sketch
> something out, and that's fine, but I'm only sharing my thoughts. I am
> particularly worried about the worst case (for either approach),
> particularly with more than 1 unique index. I am also worried about
> livelock hazards (again, in particular with more than 1 index) - I am
> not asserting that they exist in your patch, but they are definitely
> more difficult to reason about. Value locking works because once a
> page lock is acquired, all unique indexes are inserted into. Could you
> have two upserters livelock each other with two unique indexes with
> 1:1 correlated values in practice (i.e. 2 unique indexes that might
> almost work as 1 composite index)? That is a reasonable usage of
> upsert, I think.
So I had it backwards: In fact, it isn't possible to get your patch to
deadlock when it should - it livelocks instead (where with my patch, as
far as I can tell, we predictably and correctly have detected
deadlocks). I see an infinite succession of "insertion conflicted after
pre-check" DEBUG1 elog messages, and no progress, which is an obvious
indication of livelock. My test does involve 2 unique indexes - that's
generally the hard case to get right. Dozens of backends are tied up in
livelock.

Test case for this is attached. My patch is considerably slowed down by
the way this test-case tangles everything up, but does get through each
pgbench run/loop in the bash script predictably enough. And when I kill
the test-case, a bunch of backends are left around, stuck in perpetual
livelock (with my patch it takes only a few seconds for the deadlock
detector to get around to killing every backend).

I'm also seeing this:

Client 45 aborted in state 2: ERROR:  attempted to lock invisible tuple
Client 55 aborted in state 2: ERROR:  attempted to lock invisible tuple
Client 41 aborted in state 2: ERROR:  attempted to lock invisible tuple

To me this seems like a problem with the (potential) total lack of
locking that your approach takes (inserting btree unique index tuples as
in your patch is a form of value locking... sort of... it's a little
hard to reason about as presented). Do you think this might be an
inherent problem, or can you suggest a way to make your approach still
work?

So I probably should have previously listed this as a requirement for
our design:

* Doesn't just work with one unique index. Naming a unique index
directly in DML, or assuming that the PK is intended, seems quite weak
to me.

This is something I discussed plenty with Robert, and I guess I just
forgot to repeat myself when asked.

Thanks
-- 
Peter Geoghegan
testcase.sh
Description: Bourne shell script
\set extent 10 * :scale
\setrandom rec 1 :extent
with rej as (insert into foo(a, b, c) values(:rec, :rec * random(), 'fd') on duplicate key lock for update returning rejects *) update foo set c = rej.c from rej where foo.a = rej.a;
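To make the two-unique-index hazard concrete, a schema along the following lines exhibits it. This is an invented illustration, not the DDL from the attached testcase.sh; table and index names are made up, and the INSERT uses the patch's proposed ON DUPLICATE KEY LOCK FOR UPDATE syntax, which is not part of any released PostgreSQL:

```sql
-- Hypothetical illustration only; not the actual test-case DDL.
-- Two unique indexes with 1:1 correlated values: together they behave
-- almost like one composite unique index, which is the hard case for
-- any upsert scheme that handles conflicts one index at a time.
create table upsert_tab (
    a int,
    b int,
    c text
);
create unique index upsert_tab_a_idx on upsert_tab (a);
create unique index upsert_tab_b_idx on upsert_tab (b);

-- Two sessions running this concurrently with the same key values:
-- without value locking held across both indexes, session 1 can "win"
-- on index a while session 2 "wins" on index b; each then sees a
-- conflict in its pre-check, backs out its insertion, and retries -
-- potentially forever (livelock).
with rej as (
    insert into upsert_tab (a, b, c)
    values (1, 1, 'val')
    on duplicate key lock for update
    returning rejects *
)
update upsert_tab set c = rej.c from rej where upsert_tab.a = rej.a;
```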
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers