On 2019-Jun-15, Alvaro Herrera wrote:

> But that's not the danger ... with the current coding, it's initialized
> to false every time through that block; that means the tuple lock will
> never be skipped if we jump back to l3.  So the danger is that the first
> iteration sets the variable, then jumps back; second iteration
> initializes the variable again, so instead of skipping the lock, it
> takes it, causing a spurious deadlock.
So, I'm too lazy today to generate a case that fully reproduces the
deadlock, because you need to stall 's2' a little bit using the
well-known advisory lock trick, but this one hits the code that would
re-initialize the variable.  I'm going to push the change to the
variable's lifetime for now.

setup
{
  drop table if exists tlu_job;
  create table tlu_job (id integer primary key, name text);
  insert into tlu_job values (1, 'a');
}

teardown
{
  drop table tlu_job;
}

session "s0"
setup { begin; set deadlock_timeout = 1; }
step "s0_fornokeyupdate" { select id from tlu_job where id = 1 for no key update; }
step "s0_update" { update tlu_job set name = 's0' where id = 1; }
step "s0_commit" { commit; }

session "s1"
setup { begin; set deadlock_timeout = 1; }
step "s1_for_key_share" { select id from tlu_job where id = 1 for key share; }
step "s1_for_update" { select id from tlu_job where id = 1 for update; }
step "s1_rollback" { rollback; }

session "s2"
setup { begin; set deadlock_timeout = 1; }
step "s2_for_key_share" { select id from tlu_job where id = 1 for key share; }
step "s2_for_share" { select id from tlu_job where id = 1 for share; }
step "s2_rollback" { rollback; }

session "s3"
setup { begin; set deadlock_timeout = 1; }
step "s3_update" { update tlu_job set name = 'c' where id = 1; }
step "s3_rollback" { rollback; }

permutation "s1_for_key_share" "s2_for_key_share" "s0_fornokeyupdate" "s2_for_share" "s0_update" "s0_commit" "s1_rollback" "s2_rollback" "s3_rollback"

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services