Robert Creager <[EMAIL PROTECTED]> writes:
> tassiv=# select relname, pg_locks.* from pg_locks left join pg_class on (
> pg_locks.relation = pg_class.oid );
Okay, let's chase it back.  The VACUUM (pid 29549) is blocked trying to
lock temp_obs_i for vacuuming:

> temp_obs_i | 50033625 | 50032754 |  | 29549 | ShareUpdateExclusiveLock | f

which is evidently blocked behind the REINDEX's lock request:

> temp_obs_i | 50033625 | 50032754 |  | 29335 | AccessExclusiveLock      | f

since there isn't anything else that would block it.  The REINDEX is
blocked by these locks:

> temp_obs_i | 50033625 | 50032754 |  | 29304 | AccessShareLock          | t
> temp_obs_i | 50033625 | 50032754 |  | 29304 | RowExclusiveLock         | t

so indeed everyone is waiting for this process:

postgres 29304 20209  0 17:37 pts/4  00:00:00 postgres: robert tassiv 192.168.0.250 idle in transaction

which is not waiting for any lock, but is just sitting on its duff while
holding quite a pile of locks.  Ergo, no deadlock, just badly written
client software.

BTW, the 29335 process is holding rather a large pile of locks itself,
way more than I'd expect for a REINDEX alone.  It looks like you must be
running the REINDEX inside a transaction that has already touched a
number of other tables.  I'd have to counsel rethinking that --- even
though there's no deadlock in this case, you are surely risking
deadlocks.  Since REINDEX needs an exclusive lock, you should try to
minimize the number of other locks you hold when you start the REINDEX.

			regards, tom lane
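PS: just to sketch the shape I mean --- the table and column names below
are made up, not taken from your schema --- the idea is to commit the
ordinary data-modifying work in its own transaction, then take the
REINDEX's exclusive lock in a separate, short transaction:

    -- hypothetical names, purely for illustration
    BEGIN;
    INSERT INTO observations SELECT * FROM staging_observations;
    UPDATE load_status SET last_load = now();
    COMMIT;   -- releases the row-level locks taken above

    BEGIN;
    REINDEX TABLE observations;   -- exclusive lock held only for this step
    COMMIT;

That way nothing else is stuck behind the exclusive lock for any longer
than the REINDEX itself takes, and the window for deadlock is much
smaller.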