Hello all,

PostgreSQL version 9.4.17

We have a number of queries running against the same DB from many systems. Among 
other things, we need to INSERT / UPDATE rows in a table based on external events - 
this is done via a pool of 5 SQL connections that processes an average of 500 
events / second. The table is used by other systems as well...
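For context, each worker serializes per-key work with a transaction-scoped 
advisory lock before the update, roughly like this (a simplified sketch - 
my_table, event_key and payload are placeholders, not our real schema; the 
lock keys match the log excerpt below):

    BEGIN;
    -- Serialize concurrent workers touching the same logical key; the lock
    -- is released automatically at COMMIT or ROLLBACK.
    SELECT pg_advisory_xact_lock(1653339453, -1622784874);

    -- Update-then-insert, since 9.4 predates INSERT ... ON CONFLICT:
    UPDATE my_table SET payload = 'new-value' WHERE event_key = 'some-key';
    INSERT INTO my_table (event_key, payload)
    SELECT 'some-key', 'new-value'
    WHERE NOT EXISTS (SELECT 1 FROM my_table WHERE event_key = 'some-key');
    COMMIT;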

We are seeing more and more locks on this table, which prevents it from being 
updated as it should - and I'm trying to see how to improve things without 
setting up an external queue system just to manage those events.

I've enabled "log_lock_waits = on" in the configuration, which generates the 
following kind of log entries:

2018-04-27 07:48:07 CEST [1419.269] "[unknown]" xx@yy LOG:  process 1419 still 
waiting for ExclusiveLock on advisory lock [16417,1653339453,2672182422,2] 
after 1000.166 ms
2018-04-27 07:48:07 CEST [1419.270] "[unknown]" xx@yy DETAIL:  Process holding 
the lock: 16418. Wait queue: 1419.
2018-04-27 07:48:07 CEST [1419.271] "[unknown]" xx@yy STATEMENT:  SELECT 
pg_advisory_xact_lock(1653339453,  -1622784874);
2018-04-27 07:48:07 CEST [1419.272] "[unknown]" xx@yy LOG:  process 1419 
acquired ExclusiveLock on advisory lock [16417,1653339453,2672182422,2] after 
1318.721 ms
2018-04-27 07:48:07 CEST [1419.273] "[unknown]" xx@yy STATEMENT:  SELECT 
pg_advisory_xact_lock(1653339453,  -1622784874);
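
For reference, the ~1000 ms threshold in these messages comes from 
deadlock_timeout: log_lock_waits reports any lock wait that exceeds it, and 
the default is 1s. The relevant configuration is just:

    # postgresql.conf
    log_lock_waits   = on
    deadlock_timeout = 1s    # waits longer than this are logged

(The second key in the bracketed lock tag, 2672182422, is the unsigned 
representation of the signed -1622784874 passed to pg_advisory_xact_lock, 
which is why the two numbers differ.)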

I can easily identify the session owner of the different processes, but the 
process holding the lock belongs to an ERP that generates many different 
kinds of queries - I'd like to identify precisely which one is causing this 
long-lasting lock.

It doesn't look like this is possible via the PostgreSQL logging features, and 
doing the usual lock troubleshooting "on the spot" using pg_locks is not 
practical due to the short duration and odd timing of those locks.
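
By troubleshooting "on the spot" I mean a query along these lines, joining 
pg_locks to pg_stat_activity (just a sketch; it only helps if it happens to 
run while the lock is actually held):

    SELECT w.pid   AS waiting_pid,
           h.pid   AS holding_pid,
           a.query AS holder_current_query
    FROM pg_locks w
    JOIN pg_locks h
      ON h.locktype = w.locktype
     AND h.database IS NOT DISTINCT FROM w.database
     AND h.classid  = w.classid
     AND h.objid    = w.objid
     AND h.objsubid = w.objsubid
     AND h.granted
    JOIN pg_stat_activity a ON a.pid = h.pid
    WHERE w.locktype = 'advisory'
      AND NOT w.granted;

Even when it fires at the right moment, pg_stat_activity.query only shows the 
holder's current (or most recent) statement, which may not be the one that 
actually took the lock.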

Does anyone have an idea of how to proceed in such a case?

Thank you,

Olivier
