Hi all

The attached patch set follows on from the discussion in [1] "Add LWLock blocker(s) information" by adding the actual LWLock* and the numeric tranche ID to each LWLock-related TRACE_POSTGRESQL_foo tracepoint.

This does not provide complete information on blockers, because it's not necessarily valid to compare any two LWLock* pointers between two process address spaces: the locks could be in DSM segments, and those DSM segments could be mapped at different addresses. I wasn't able to work out a sensible way to map an LWLock* to any sort of (tranche-id, lock-index) pair, because there's no requirement that locks in a tranche be contiguous or known individually to the lmgr.

Despite that, the patches significantly improve the information available for LWLock analysis.

Patch 1 fixes a bogus tracepoint: an lwlock__acquire event would be fired from LWLockWaitForVar, even though that function never actually acquires the lock.

Patch 2 adds the tranche ID and lock pointer to each trace hit. This makes it possible to differentiate between individual locks within a tranche and (so long as they aren't tranches in a DSM segment) to compare locks between processes. That means you can do lock-order analysis etc., which was not previously especially feasible. Traces also no longer need a userspace read for the tranche name on every hit, so tracing can run with lower overhead.

Patch 3 adds a single-path tracepoint for all lock acquires and releases, so you only have to probe the lwlock__acquired and lwlock__release events to see all acquires/releases, whether conditional or otherwise. It also adds start markers that can be used to time the wallclock duration of LWLock acquires/releases.

Patch 4 adds some comments on LWLock tranches to try to address some points I found confusing and hard to understand when investigating this topic.

[1] https://www.postgresql.org/message-id/CAGRY4nz%3DSEs3qc1R6xD3max7sg3kS-L81eJk2aLUWSQAeAFJTA%40mail.gmail.com
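To illustrate the kind of lock-order analysis patch 2 enables: once each event carries a stable LWLock*, a trace post-processor can detect pairs of locks taken in opposite orders by different backends. The sketch below is illustrative only, assuming a hypothetical pre-captured event log of (pid, action, lock) tuples gathered from the lwlock__acquire and lwlock__release probes; the log format and field names are not part of the patches.

```python
from collections import defaultdict

def find_order_inversions(events):
    """Detect pairs of locks acquired in opposite orders by different PIDs.

    events: iterable of (pid, action, lock) tuples, where action is
    "acquire" or "release" and lock is the traced LWLock* value.
    Returns a set of frozensets {lock_a, lock_b} observed in both orders.
    """
    held = defaultdict(list)   # pid -> list of locks currently held
    orders = set()             # (outer, inner) acquisition orders observed
    for pid, action, lock in events:
        if action == "acquire":
            # Every already-held lock is an "outer" lock relative to this one.
            for outer in held[pid]:
                orders.add((outer, lock))
            held[pid].append(lock)
        elif action == "release" and lock in held[pid]:
            held[pid].remove(lock)
    return {frozenset((a, b)) for (a, b) in orders if (b, a) in orders}

# Two backends taking locks 0xA and 0xB in opposite orders:
trace = [
    (1, "acquire", 0xA), (1, "acquire", 0xB),
    (1, "release", 0xB), (1, "release", 0xA),
    (2, "acquire", 0xB), (2, "acquire", 0xA),
    (2, "release", 0xA), (2, "release", 0xB),
]
print(find_order_inversions(trace))  # reports the {0xA, 0xB} pair
```

In a real deployment the event stream would come from a tracing frontend (SystemTap, perf, bpftrace) attached to the probes; the analysis itself is just bookkeeping over the (pid, lock) stream.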
From 583c818e3121c0f7c6506b434497c81ae94ee9cb Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig.ringer@2ndquadrant.com>
Date: Thu, 19 Nov 2020 17:30:47 +0800
Subject: [PATCH v1 4/4] Comments on LWLock tranches

---
 src/backend/storage/lmgr/lwlock.c | 49 +++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index cfdfa7f328..123bcc463e 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -112,11 +112,14 @@ extern slock_t *ShmemLock;
  *
  * 1. The individually-named locks defined in lwlocknames.h each have their
  * own tranche. The names of these tranches appear in IndividualLWLockNames[]
- * in lwlocknames.c.
+ * in lwlocknames.c. The LWLock structs are allocated in MainLWLockArray.
  *
  * 2. There are some predefined tranches for built-in groups of locks.
  * These are listed in enum BuiltinTrancheIds in lwlock.h, and their names
- * appear in BuiltinTrancheNames[] below.
+ * appear in BuiltinTrancheNames[] below. The LWLock structs are allocated
+ * elsewhere under the control of the subsystem that manages the tranche. The
+ * LWLock code does not know or care where in shared memory they are allocated
+ * or how many there are in a tranche.
  *
  * 3. Extensions can create new tranches, via either RequestNamedLWLockTranche
  * or LWLockRegisterTranche. The names of these that are known in the current
@@ -196,6 +199,13 @@ static int	LWLockTrancheNamesAllocated = 0;
  * This points to the main array of LWLocks in shared memory. Backends inherit
  * the pointer by fork from the postmaster (except in the EXEC_BACKEND case,
  * where we have special measures to pass it down).
+ *
+ * This array holds individual LWLocks and LWLocks allocated in named tranches.
+ *
+ * It does not hold locks for any LWLock that's separately initialized with
+ * LWLockInitialize(). Locks in tranches listed in BuiltinTrancheIds or
+ * allocated with LWLockNewTrancheId() can be embedded in other structs
+ * anywhere in shared memory.
  */
 LWLockPadded *MainLWLockArray = NULL;

@@ -593,6 +603,12 @@ InitLWLockAccess(void)
  * Caller needs to retrieve the requested number of LWLocks starting from
  * the base lock address returned by this API. This can be used for
  * tranches that are requested by using RequestNamedLWLockTranche() API.
+ *
+ * The locks are already initialized.
+ *
+ * This function cannot be used for locks in builtin tranches or tranches
+ * registered with LWLockRegisterTranche(). There is no way to look those locks
+ * up by name.
  */
 LWLockPadded *
 GetNamedLWLockTranche(const char *tranche_name)
@@ -647,6 +663,14 @@ LWLockNewTrancheId(void)
  *
  * The tranche name will be user-visible as a wait event name, so try to
  * use a name that fits the style for those.
+ *
+ * The tranche ID should be a user-defined tranche ID acquired from
+ * LWLockNewTrancheId(). It is not necessary to call this for tranches
+ * allocated by RequestNamedLWLockTranche().
+ *
+ * The LWLock subsystem does not know where LWLock(s) that will be assigned to
+ * this tranche are stored, or how many of them there are. The caller allocates
+ * suitable shared memory storage and initializes locks with LWLockInitialize().
  */
 void
 LWLockRegisterTranche(int tranche_id, const char *tranche_name)
@@ -699,6 +723,10 @@ LWLockRegisterTranche(int tranche_id, const char *tranche_name)
  *
  * The tranche name will be user-visible as a wait event name, so try to
  * use a name that fits the style for those.
+ *
+ * The LWLocks allocated here are retrieved after shmem startup using
+ * GetNamedLWLockTranche(). They are initialized during shared memory startup
+ * so it is not necessary to call LWLockInitialize() on them.
  */
 void
 RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks)
@@ -739,10 +767,17 @@ RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks)
 
 /*
  * LWLockInitialize - initialize a new lwlock; it's initially unlocked
+ *
+ * For callers outside the LWLock subsystem itself, the tranche ID must either
+ * be a BuiltinTrancheIds entry for the calling subsystem or a tranche ID
+ * assigned with LWLockNewTrancheId().
  */
 void
 LWLockInitialize(LWLock *lock, int tranche_id)
 {
+	/* Re-initialization of individual LWLocks is not permitted */
+	Assert(tranche_id >= NUM_INDIVIDUAL_LWLOCKS || !IsUnderPostmaster);
+
 	pg_atomic_init_u32(&lock->state, LW_FLAG_RELEASE_OK);
 #ifdef LOCK_DEBUG
 	pg_atomic_init_u32(&lock->nwaiters, 0);
@@ -803,6 +838,11 @@ GetLWTrancheName(uint16 trancheId)
 
 /*
  * Return an identifier for an LWLock based on the wait class and event.
+ *
+ * Note that there's no way to identify an individual LWLock within a tranche
+ * by anything except its address. The LWLock subsystem doesn't know how many
+ * locks there are in all tranches and there's no requirement that they be
+ * stored in contiguous arrays.
  */
 const char *
 GetLWLockIdentifier(uint32 classId, uint16 eventId)
@@ -1010,7 +1050,7 @@ LWLockWakeup(LWLock *lock)
 	Assert(proclist_is_empty(&wakeup) ||
 		   pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS);
 
-	/* unset required flags, and release lock, in one fell swoop */
+	/* unset required flags, and release waitlist lock, in one fell swoop */
 	{
 		uint32		old_state;
 		uint32		desired_state;
@@ -1837,6 +1877,9 @@ LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
 
 /*
  * LWLockRelease - release a previously acquired lock
+ *
+ * The actual lock acquire corresponding to this release happens in
+ * LWLockAttemptLock().
  */
 void
 LWLockRelease(LWLock *lock)
-- 
2.29.2
From e5f7ba0ba4c72db7f59c3d22818f532f6b0be90a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig.ringer@2ndquadrant.com>
Date: Thu, 19 Nov 2020 18:15:34 +0800
Subject: [PATCH v1 1/4] Remove bogus lwlock__acquire tracepoint from
 LWLockWaitForVar

Calls to LWLockWaitForVar fired the TRACE_POSTGRESQL_LWLOCK_ACQUIRE
tracepoint, but LWLockWaitForVar() never actually acquires the LWLock.
Remove it.
---
 src/backend/storage/lmgr/lwlock.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index 108e652179..29e29707d7 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -1726,8 +1726,6 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 		/* Now loop back and check the status of the lock again. */
 	}
 
-	TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), LW_EXCLUSIVE);
-
 	/*
 	 * Fix the process wait semaphore's count for any absorbed wakeups.
 	 */
-- 
2.29.2
From b9b93919d51710cbb3427f9d99764e013657bb3a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig.ringer@2ndquadrant.com>
Date: Thu, 19 Nov 2020 17:38:45 +0800
Subject: [PATCH v1 2/4] Pass the target LWLock* and tranche ID to LWLock
 tracepoints

Previously the TRACE_POSTGRESQL_LWLOCK_ tracepoints only received a
pointer to the LWLock tranche name. This made it impossible to identify
individual locks.

Passing the lock pointer itself isn't perfect. If the lock is allocated
inside a DSM segment then it might be mapped at a different address in
different backends. It's safe to compare lock pointers between backends
(assuming !EXEC_BACKEND) if they're in the individual lock tranches or
an extension-requested named tranche, but not necessarily for tranches
in BuiltinTrancheIds or tranches >= LWTRANCHE_FIRST_USER_DEFINED that
were directly assigned with LWLockNewTrancheId().

Still, it's better than nothing; the pointer is stable within a
backend, and usually between backends.
---
 src/backend/storage/lmgr/lwlock.c | 35 +++++++++++++++++++------------
 src/backend/utils/probes.d        | 18 +++++++++-------
 2 files changed, 32 insertions(+), 21 deletions(-)

diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index 29e29707d7..63f1a619b0 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -1322,7 +1322,8 @@ LWLockAcquire(LWLock *lock, LWLockMode mode)
 #endif
 
 		LWLockReportWaitStart(lock);
-		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode, lock,
+										   lock->tranche);
 
 		for (;;)
 		{
@@ -1344,7 +1345,8 @@ LWLockAcquire(LWLock *lock, LWLockMode mode)
 		}
 #endif
 
-		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode, lock,
+										  lock->tranche);
 		LWLockReportWaitEnd();
 
 		LOG_LWDEBUG("LWLockAcquire", lock, "awakened");
@@ -1353,7 +1355,7 @@ LWLockAcquire(LWLock *lock, LWLockMode mode)
 		result = false;
 	}
 
-	TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode);
+	TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode, lock, lock->tranche);
 
 	/* Add lock to list of locks held by this backend */
 	held_lwlocks[num_held_lwlocks].lock = lock;
@@ -1404,14 +1406,16 @@ LWLockConditionalAcquire(LWLock *lock, LWLockMode mode)
 		RESUME_INTERRUPTS();
 
 		LOG_LWDEBUG("LWLockConditionalAcquire", lock, "failed");
-		TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE_FAIL(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE_FAIL(T_NAME(lock), mode, lock,
+												 lock->tranche);
 	}
 	else
 	{
 		/* Add lock to list of locks held by this backend */
 		held_lwlocks[num_held_lwlocks].lock = lock;
 		held_lwlocks[num_held_lwlocks++].mode = mode;
-		TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE(T_NAME(lock), mode, lock,
+											lock->tranche);
 	}
 	return !mustwait;
 }
@@ -1483,7 +1487,8 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)
 #endif
 
 		LWLockReportWaitStart(lock);
-		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode, lock,
+										   lock->tranche);
 
 		for (;;)
 		{
@@ -1501,7 +1506,8 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)
 			Assert(nwaiters < MAX_BACKENDS);
 		}
 #endif
-		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode, lock,
+										  lock->tranche);
 		LWLockReportWaitEnd();
 
 		LOG_LWDEBUG("LWLockAcquireOrWait", lock, "awakened");
@@ -1531,7 +1537,8 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)
 		/* Failed to get lock, so release interrupt holdoff */
 		RESUME_INTERRUPTS();
 		LOG_LWDEBUG("LWLockAcquireOrWait", lock, "failed");
-		TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT_FAIL(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT_FAIL(T_NAME(lock), mode, lock,
+													 lock->tranche);
 	}
 	else
 	{
@@ -1539,7 +1546,8 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)
 		/* Add lock to list of locks held by this backend */
 		held_lwlocks[num_held_lwlocks].lock = lock;
 		held_lwlocks[num_held_lwlocks++].mode = mode;
-		TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT(T_NAME(lock), mode);
+		TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT(T_NAME(lock), mode, lock,
+												lock->tranche);
 	}
 
 	return !mustwait;
@@ -1699,7 +1707,8 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 #endif
 
 		LWLockReportWaitStart(lock);
-		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), LW_EXCLUSIVE);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), LW_EXCLUSIVE, lock,
+										   lock->tranche);
 
 		for (;;)
 		{
@@ -1718,7 +1727,7 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 		}
 #endif
 
-		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), LW_EXCLUSIVE);
+		TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), LW_EXCLUSIVE, lock, lock->tranche);
 		LWLockReportWaitEnd();
 
 		LOG_LWDEBUG("LWLockWaitForVar", lock, "awakened");
@@ -1844,6 +1853,8 @@ LWLockRelease(LWLock *lock)
 		/* nobody else can have that kind of lock */
 		Assert(!(oldstate & LW_VAL_EXCLUSIVE));
 
+	/* Released, though not woken yet. All releases must fire this. */
+	TRACE_POSTGRESQL_LWLOCK_RELEASE(T_NAME(lock), mode, lock, lock->tranche);
 
 	/*
 	 * We're still waiting for backends to get scheduled, don't wake them up
@@ -1867,8 +1878,6 @@ LWLockRelease(LWLock *lock)
 		LWLockWakeup(lock);
 	}
 
-	TRACE_POSTGRESQL_LWLOCK_RELEASE(T_NAME(lock));
-
 	/*
 	 * Now okay to allow cancel/die interrupts.
 	 */
diff --git a/src/backend/utils/probes.d b/src/backend/utils/probes.d
index a0b0458108..89805c3a89 100644
--- a/src/backend/utils/probes.d
+++ b/src/backend/utils/probes.d
@@ -17,6 +17,7 @@
 #define LocalTransactionId unsigned int
 #define LWLockMode int
 #define LOCKMODE int
+#define LWLock void
 #define BlockNumber unsigned int
 #define Oid unsigned int
 #define ForkNumber int
@@ -28,14 +29,15 @@ provider postgresql {
 	probe transaction__commit(LocalTransactionId);
 	probe transaction__abort(LocalTransactionId);
 
-	probe lwlock__acquire(const char *, LWLockMode);
-	probe lwlock__release(const char *);
-	probe lwlock__wait__start(const char *, LWLockMode);
-	probe lwlock__wait__done(const char *, LWLockMode);
-	probe lwlock__condacquire(const char *, LWLockMode);
-	probe lwlock__condacquire__fail(const char *, LWLockMode);
-	probe lwlock__acquire__or__wait(const char *, LWLockMode);
-	probe lwlock__acquire__or__wait__fail(const char *, LWLockMode);
+	probe lwlock__acquire(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__release(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__wait__start(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__wait__done(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__condacquire(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__condacquire__fail(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__acquire__or__wait(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__acquire__or__wait__fail(const char *, LWLockMode, LWLock*, int);
+
 	probe lock__wait__start(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE);
 	probe lock__wait__done(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE);
-- 
2.29.2
From 942e36fb368352f2a3c9c93f7f191ec5d5ef5bf7 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig.ringer@2ndquadrant.com>
Date: Thu, 19 Nov 2020 18:05:39 +0800
Subject: [PATCH v1 3/4] Add further tracepoints to the LWLock routines

The existing tracepoints in lwlock.c didn't mark the start of LWLock
acquisition, so timing the full LWLock acquire cycle wasn't possible
without relying on debuginfo. Since this can be quite relevant for
production performance issues, emit tracepoints at the start of LWLock
acquire.

Also add a tracepoint that's fired for all LWLock acquisitions at the
moment the shared memory state changes, whether done by LWLockAcquire
or LWLockConditionalAcquire. This lets tools reliably track which
backends hold which LWLocks even if we add new functions that acquire
LWLocks in future.

Add tracepoints in LWLockWaitForVar and LWLockUpdateVar so process
interaction around LWLock variable waits can be observed from trace
tooling. They can cause long waits and/or deadlocks, so it's worth
being able to time and track them.
---
 src/backend/storage/lmgr/lwlock.c | 24 ++++++++++++++++++++++++
 src/backend/utils/probes.d        |  8 ++++++++
 2 files changed, 32 insertions(+)

diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index 63f1a619b0..cfdfa7f328 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -875,6 +875,9 @@ LWLockAttemptLock(LWLock *lock, LWLockMode mode)
 				if (mode == LW_EXCLUSIVE)
 					lock->owner = MyProc;
 #endif
+				/* All LWLock acquires must hit this tracepoint */
+				TRACE_POSTGRESQL_LWLOCK_ACQUIRED(T_NAME(lock), mode, lock,
+												 lock->tranche);
 				return false;
 			}
 			else
@@ -1238,6 +1241,9 @@ LWLockAcquire(LWLock *lock, LWLockMode mode)
 	if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
 		elog(ERROR, "too many LWLocks taken");
 
+	TRACE_POSTGRESQL_LWLOCK_ACQUIRE_START(T_NAME(lock), mode, lock,
+										  lock->tranche);
+
 	/*
 	 * Lock out cancel/die interrupts until we exit the code section protected
 	 * by the LWLock.  This ensures that interrupts will not interfere with
@@ -1390,6 +1396,9 @@ LWLockConditionalAcquire(LWLock *lock, LWLockMode mode)
 	if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
 		elog(ERROR, "too many LWLocks taken");
 
+	TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE_START(T_NAME(lock), mode, lock,
+											  lock->tranche);
+
 	/*
 	 * Lock out cancel/die interrupts until we exit the code section protected
 	 * by the LWLock.  This ensures that interrupts will not interfere with
@@ -1454,6 +1463,9 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)
 	if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
 		elog(ERROR, "too many LWLocks taken");
 
+	TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT_START(T_NAME(lock), mode, lock,
+												  lock->tranche);
+
 	/*
 	 * Lock out cancel/die interrupts until we exit the code section protected
 	 * by the LWLock.  This ensures that interrupts will not interfere with
@@ -1636,6 +1648,9 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 
 	PRINT_LWDEBUG("LWLockWaitForVar", lock, LW_WAIT_UNTIL_FREE);
 
+	TRACE_POSTGRESQL_LWLOCK_WAITFORVAR_START(T_NAME(lock), lock,
+											 lock->tranche, valptr, oldval, *valptr);
+
 	/*
 	 * Lock out cancel/die interrupts while we sleep on the lock.  There is no
 	 * cleanup mechanism to remove us from the wait queue if we got
@@ -1746,6 +1761,9 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 	 */
 	RESUME_INTERRUPTS();
 
+	TRACE_POSTGRESQL_LWLOCK_WAITFORVAR_DONE(T_NAME(lock), lock, lock->tranche,
+											valptr, oldval, *newval, result);
+
 	return result;
 }
 
@@ -1768,6 +1786,9 @@ LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
 
 	PRINT_LWDEBUG("LWLockUpdateVar", lock, LW_EXCLUSIVE);
 
+	TRACE_POSTGRESQL_LWLOCK_UPDATEVAR_START(T_NAME(lock), lock, lock->tranche,
+											valptr, val);
+
 	proclist_init(&wakeup);
 
 	LWLockWaitListLock(lock);
@@ -1808,6 +1829,9 @@ LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
 		waiter->lwWaiting = false;
 		PGSemaphoreUnlock(waiter->sem);
 	}
+
+	TRACE_POSTGRESQL_LWLOCK_UPDATEVAR_DONE(T_NAME(lock), lock, lock->tranche,
+										   valptr, val);
 }
 
diff --git a/src/backend/utils/probes.d b/src/backend/utils/probes.d
index 89805c3a89..a62fdf61df 100644
--- a/src/backend/utils/probes.d
+++ b/src/backend/utils/probes.d
@@ -29,14 +29,22 @@ provider postgresql {
 	probe transaction__commit(LocalTransactionId);
 	probe transaction__abort(LocalTransactionId);
 
+	probe lwlock__acquired(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__acquire__start(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__acquire(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__release(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__wait__start(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__wait__done(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__condacquire__start(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__condacquire(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__condacquire__fail(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__acquire__or__wait__start(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__acquire__or__wait(const char *, LWLockMode, LWLock*, int);
 	probe lwlock__acquire__or__wait__fail(const char *, LWLockMode, LWLock*, int);
+	probe lwlock__waitforvar__start(const char *, LWLock*, int, uint64, uint64, uint64);
+	probe lwlock__waitforvar__done(const char *, LWLock*, int, uint64, uint64, uint64, bool);
+	probe lwlock__updatevar__start(const char *, LWLock*, int, uint64, uint64);
+	probe lwlock__updatevar__done(const char *, LWLock*, int, uint64, uint64);
 
 	probe lock__wait__start(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE);
-- 
2.29.2
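To show what the new __start markers buy you: pairing each lwlock__acquire__start with the matching lwlock__acquired for the same (pid, lock) gives the wallclock acquire latency. The sketch below is a post-processing illustration only; it assumes a hypothetical pre-captured log of (timestamp, pid, probe, lock) tuples, which is not a format defined by these patches.

```python
def acquire_latencies(events):
    """Pair lwlock__acquire__start with lwlock__acquired per (pid, lock).

    events: time-ordered iterable of (timestamp, pid, probe, lock) tuples,
    where probe is the tracepoint name. Returns (pid, lock, latency) tuples.
    """
    pending = {}   # (pid, lock) -> timestamp of the acquire__start event
    out = []
    for ts, pid, probe, lock in events:
        key = (pid, lock)
        if probe == "lwlock__acquire__start":
            pending[key] = ts
        elif probe == "lwlock__acquired" and key in pending:
            # Latency is wallclock time from start marker to state change.
            out.append((pid, lock, ts - pending.pop(key)))
    return out

# Backend 7 waits 60 time units for lock 0xA; backend 8 waits 30:
trace = [
    (100, 7, "lwlock__acquire__start", 0xA),
    (160, 7, "lwlock__acquired", 0xA),
    (200, 8, "lwlock__acquire__start", 0xA),
    (230, 8, "lwlock__acquired", 0xA),
]
print(acquire_latencies(trace))  # [(7, 10, 60), (8, 10, 30)]
```

The same pairing works for the condacquire__start and acquire__or__wait__start markers, since all paths converge on the single lwlock__acquired event added in this patch.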