On Fri, 2006-07-21 at 14:17, Claire McLister wrote:
> Yes, that could be the case. We have a python function that imports
> CSV files, which can take a long time, and that may have been running
> during that time. I didn't look at the pg_lock file. What should I be
> looking for?
I have the following in ~/.psqlrc:

---------- snip here ---------------------
prepare locks(bigint) as
  select c.relname, l.*
    from pg_locks l
    left outer join pg_class c on c.oid = l.relation
   where pid = $1
  union all
  select c.relname, l.*
    from pg_locks l
    left outer join pg_class c on c.oid = l.relation
   where l.pid in (select ml.pid
                     from pg_locks ml, pg_locks cl
                    where cl.pid = $1
                      and not cl.granted
                      and cl.transaction = ml.transaction
                      and ml.mode = 'ExclusiveLock');

\set lck 'execute locks'

\set ps 'SELECT procpid, substring(current_query for 97), to_char((now()-query_start), \'HH24:MI:SS\') as t FROM pg_stat_activity where current_query not like \'%<insufficient%\' and current_query not like \'%IDLE%\' order by t desc;'
---------- snip here ---------------------

Then use:

dbprompt=> :lck(pid);

where "pid" is the process id of the backend running your blocked query. That pid is easy to find if you enable command strings in the config file (without that you will see the backends but not their queries in pg_stat_activity, which makes it useless for this purpose) and use the :ps shortcut defined above. The line truncation size (97 in my case) is tuned to my terminal's width, so change it to fit yours.

HTH,
Csaba.
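PS: a minimal sketch of the end-to-end workflow, assuming a PostgreSQL 8.1-era server where the command-string setting is named stats_command_string (that name is an assumption for this era; newer releases rename it, so check your postgresql.conf):

---------- snip here ---------------------
# postgresql.conf
# collect query text so pg_stat_activity.current_query is populated
# (assumed parameter name for this era)
stats_command_string = on
---------- snip here ---------------------

After a config reload (or restart, depending on the version), find the stuck backend with :ps and feed its procpid to :lck:

dbprompt=> :ps
dbprompt=> :lck(12345);

(12345 is a hypothetical pid taken from the :ps output.)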