On Sat, Oct 23, 2010 at 4:03 PM, Josh Berkus <j...@agliodbs.com> wrote:
> I think that such a lock would also be useful for improving the FK deadlock
> issues we have.
I don't see how. I think the problem you're referring to occurs when different plans update rows in different orders, so the resulting locks on the foreign key targets are taken in different orders. In that case the problem isn't that we're unable to lock the resources -- they're locked using regular row locks -- but that there's nothing controlling the order in which those locks are taken.

I don't think it would be acceptable to hold low-level btree page locks across multiple independent row operations on different rows, and I don't see how they would be any better than the row locks we have now. Worse, the resulting deadlock would no longer be detectable (which is one of the reasons it wouldn't be acceptable to hold the lock that long).

That does point out a problem with the logic I sketched. If you go to do an update and find there's an uncommitted update pending, you have to wait on it, and you can't do that while holding the index page lock. I assume you would then release the page lock, wait on the uncommitted transaction, and when it finishes start over with the btree lookup and reacquire the lock. I haven't thought it through in detail, though.

-- 
greg
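
P.S. To make the ordering point concrete, here is a minimal two-session sketch of that kind of FK deadlock (the table names are made up, and the opposite lock order is forced by hand here; in the real complaints it comes from different plans visiting many rows in different orders):

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (id int PRIMARY KEY,
                         parent_id int REFERENCES parent);
    INSERT INTO parent VALUES (1), (2);

    -- Session A                              -- Session B
    BEGIN;                                    BEGIN;
    SELECT 1 FROM parent                      SELECT 1 FROM parent
      WHERE id = 1 FOR UPDATE;                  WHERE id = 2 FOR UPDATE;
    -- row lock on parent 1                   -- row lock on parent 2
    INSERT INTO child VALUES (10, 2);         INSERT INTO child VALUES (20, 1);
    -- FK check wants a share lock on         -- FK check wants a share lock on
    -- parent 2: A waits behind B             -- parent 1: B waits behind A,
                                              -- deadlock detected, one aborts

Both parent rows are perfectly lockable on their own; the deadlock comes entirely from the two sessions taking the same row locks in opposite orders, which is why a lower-level lock doesn't obviously buy anything here.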
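
And the "wait on an uncommitted update" case is just the ordinary row-lock wait we already have (reusing the made-up tables above):

    -- Session A                              -- Session B
    BEGIN;                                    
    UPDATE parent SET id = id                 
      WHERE id = 1;                           
    -- uncommitted                            UPDATE parent SET id = id
                                                WHERE id = 1;
                                              -- blocks until A commits or aborts

The point is just that you can't sit in that wait while still holding a btree page lock, hence the release-and-retry described above.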