On Thu, May 22, 2008 at 4:20 PM, Bill Moran <[EMAIL PROTECTED]> wrote:
> Please don't top-post. I've attempted to reconstruct the conversation
> flow.
>
> In response to "antiochus antiochus" <[EMAIL PROTECTED]>:
>
> > On Thu, May 22, 2008 at 2:57 PM, Bill Moran <[EMAIL PROTECTED]> wrote:
> >
> > > In response to "antiochus antiochus" <[EMAIL PROTECTED]>:
> > > >
> > > > I have a deadlock situation, two transactions waiting on each
> > > > other to complete. Based on the details below, would anyone have
> > > > recommendations for me, please?
> > >
> > > I have a theory on deadlocks, and that theory is that it's damn near
> > > impossible to track them all down, so your best bet is to wrap all
> > > SQL calls in a function that detects deadlock and sleep/retries.
> >
> > One possibility might then seem to be something like:
> >
> >   UPDATE tt SET ... WHERE ID IN
> >     (SELECT ID FROM tt WHERE ... ORDER BY ID ASC FOR UPDATE);
> >
> > but unfortunately FOR UPDATE is not allowed in subqueries. Therefore,
> > one could do:
> >
> >   SELECT ID FROM tt WHERE ... ORDER BY ID ASC FOR UPDATE;
> >   UPDATE tt SET ... WHERE ...;
> >
> > However, in read committed mode, it is not guaranteed that the subset
> > of rows selected by the two WHERE tests will be the same...
>
> I can see two solutions:
>
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> SELECT ID FROM tt WHERE ... ORDER BY ID ASC FOR UPDATE;
> UPDATE tt SET ... WHERE ...;
> COMMIT;
>
> or
>
> BEGIN;
> LOCK TABLE tt IN SHARE MODE;
> SELECT ID FROM tt WHERE ... ORDER BY ID ASC FOR UPDATE;
> UPDATE tt SET ... WHERE ...;
> COMMIT;
>
> Depending on exactly what you need to accomplish.
>
> [snip]
>
> > > > Careful inspection of these (unfortunately complex) queries seems
> > > > to indicate row-level locks are acquired in consistent order,
> > > > assuming that any command of the type
> > > >
> > > >   UPDATE tt SET ... WHERE ...
> > > >
> > > > will always lock rows in a consistent order (can someone confirm
> > > > that it is necessarily the case).
> > >
> > > I believe that assertion is incorrect. Without seeing your entire
> > > query, I can only speculate, but unless you have an explicit
> > > ordering clause, there's no guarantee what order rows will be
> > > accessed in.
> > >
> > > Try putting an explicit ORDER BY in the queries and see if the
> > > problem goes away.
>
> --
> Bill Moran
> Collaborative Fusion Inc.
> http://people.collaborativefusion.com/~wmoran/
>
> [EMAIL PROTECTED]
> Phone: 412-422-3463x4023

Thanks, Bill.

For the past few days, I've been making the application work using your
second suggestion (what I meant by "some form of explicit locking"). It
does work. However, it is the first explicit lock used in this growing
application, and I was trying not to have any.

The rows of this table are, from a business-logic perspective,
partitioned into distinct sets (think one set per customer), and a
transaction never involves rows across these sets. So in the original
design, concurrency is great across customer requests. By using the
table-level lock, writing transactions targeting these non-overlapping
per-customer sets end up waiting on each other... If that is the best we
can do, so be it.

However, I do not understand why acquiring row-level locks in consistent
order does not seem to be enough, or, if it is not, why my methodology
for enforcing this ordering is flawed. Note that I also use foreign
keys, and I am aware that constraints are verified (and locks acquired)
after row insertion; this is taken into account as well.

I could have a "per-customer" lock, to improve concurrency across
customers while avoiding deadlocks, or use serializable transactions,
but I was wondering whether a methodology to accomplish what I was
originally trying to do has been documented anywhere.

Thanks a lot for your suggestions.

Regards,

A.
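[For archive readers: the "per-customer lock" idea mentioned above maps naturally onto PostgreSQL's advisory locks, which serialize only the transactions that contend on the same key, leaving other customers fully concurrent. A minimal sketch, assuming a `customer_id` column on `tt` and using the session-level advisory-lock functions available in this era of PostgreSQL; the key value 42 and the elided WHERE/SET clauses are placeholders:]

```sql
BEGIN;
-- Serialize only transactions for the same customer; transactions for
-- other customers take different keys and proceed concurrently.
SELECT pg_advisory_lock(42);     -- 42 stands in for this customer's id
UPDATE tt SET ... WHERE customer_id = 42 AND ...;
SELECT pg_advisory_unlock(42);   -- session-level lock: release explicitly
COMMIT;
```

[Unlike LOCK TABLE ... IN SHARE MODE, this does not block writers for unrelated customers, at the cost of requiring every code path that touches a customer's rows to take the same advisory key first.]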
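[Bill's other suggestion, wrapping SQL calls in a function that detects deadlock and sleep/retries, can be sketched language-agnostically. Below is an illustrative Python version; the `DeadlockDetected` exception is a stand-in for whatever your driver raises on PostgreSQL's SQLSTATE 40P01, and the backoff parameters are arbitrary:]

```python
import random
import time

# Stand-in for the driver-specific deadlock error (SQLSTATE 40P01).
class DeadlockDetected(Exception):
    pass

def retry_on_deadlock(fn, max_attempts=5, base_delay=0.05):
    """Run fn() (one whole transaction); on deadlock, retry with
    exponential backoff plus a little jitter to break lockstep."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except DeadlockDetected:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))

# Demo: a transaction that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockDetected("deadlock detected")
    return "committed"

print(retry_on_deadlock(flaky_txn))  # prints "committed" after 2 retries
```

[The important detail is that `fn` must re-run the *entire* transaction from BEGIN, since PostgreSQL aborts the whole transaction when it kills a deadlock victim.]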