https://github.com/python/cpython/commit/1b2301c009e4363d90aa3babcfde0b1a8489ae6b
commit: 1b2301c009e4363d90aa3babcfde0b1a8489ae6b
branch: 3.13
author: Sam Gross <[email protected]>
committer: colesbury <[email protected]>
date: 2026-04-22T18:56:24Z
summary:

[3.13] gh-148820: Fix _PyRawMutex use-after-free on spurious semaphore wakeup (gh-148852) (#148885)

_PyRawMutex_UnlockSlow CAS-removes the waiter from the list and then
calls _PySemaphore_Wakeup, with no handshake. If _PySemaphore_Wait
returns Py_PARK_INTR, the waiter can destroy its stack-allocated
semaphore before the unlocker's Wakeup runs, causing a fatal error from
ReleaseSemaphore / sem_post.

Loop in _PyRawMutex_LockSlow until _PySemaphore_Wait returns Py_PARK_OK,
which is only signalled when a matching Wakeup has been observed.

Also include GetLastError() and the handle in the Windows fatal messages
in _PySemaphore_Init, _PySemaphore_Wait, and _PySemaphore_Wakeup to make
similar races easier to diagnose in the future.

(cherry picked from commit ad3c5b7958b890382f431a53349320cb7c84d405)

files:
A Misc/NEWS.d/next/Core_and_Builtins/2026-04-21-14-36-44.gh-issue-148820.XhOGhA.rst
M Python/lock.c
M Python/parking_lot.c

diff --git a/Misc/NEWS.d/next/Core_and_Builtins/2026-04-21-14-36-44.gh-issue-148820.XhOGhA.rst b/Misc/NEWS.d/next/Core_and_Builtins/2026-04-21-14-36-44.gh-issue-148820.XhOGhA.rst
new file mode 100644
index 00000000000000..392becaffb73cf
--- /dev/null
+++ b/Misc/NEWS.d/next/Core_and_Builtins/2026-04-21-14-36-44.gh-issue-148820.XhOGhA.rst
@@ -0,0 +1,5 @@
+Fix a race in :c:type:`!_PyRawMutex` on the free-threaded build where a
+``Py_PARK_INTR`` return from ``_PySemaphore_Wait`` could let the waiter
+destroy its semaphore before the unlocking thread's
+``_PySemaphore_Wakeup`` completed, causing a fatal ``ReleaseSemaphore``
+error.
diff --git a/Python/lock.c b/Python/lock.c
index 24c404a1246130..6def8567b11a17 100644
--- a/Python/lock.c
+++ b/Python/lock.c
@@ -210,7 +210,16 @@ _PyRawMutex_LockSlow(_PyRawMutex *m)
 
         // Wait for us to be woken up. Note that we still have to lock the
         // mutex ourselves: it is NOT handed off to us.
-        _PySemaphore_Wait(&waiter.sema, -1, /*detach=*/0);
+        //
+        // Loop until we observe an actual wakeup. A return of Py_PARK_INTR
+        // could otherwise let us exit _PySemaphore_Wait and destroy
+        // `waiter.sema` while _PyRawMutex_UnlockSlow's matching
+        // _PySemaphore_Wakeup is still pending, since the unlocker has
+        // already CAS-removed us from the waiter list without any handshake.
+        int res;
+        do {
+            res = _PySemaphore_Wait(&waiter.sema, -1, /*detach=*/0);
+        } while (res != Py_PARK_OK);
     }
 
     _PySemaphore_Destroy(&waiter.sema);
diff --git a/Python/parking_lot.c b/Python/parking_lot.c
index a7e9760e35d87a..48577be47353c3 100644
--- a/Python/parking_lot.c
+++ b/Python/parking_lot.c
@@ -61,7 +61,9 @@ _PySemaphore_Init(_PySemaphore *sema)
         NULL    //  unnamed
     );
     if (!sema->platform_sem) {
-        Py_FatalError("parking_lot: CreateSemaphore failed");
+        _Py_FatalErrorFormat(__func__,
+            "parking_lot: CreateSemaphore failed (error: %u)",
+            GetLastError());
     }
 #elif defined(_Py_USE_SEMAPHORES)
     if (sem_init(&sema->platform_sem, /*pshared=*/0, /*value=*/0) < 0) {
@@ -229,7 +231,9 @@ _PySemaphore_Wakeup(_PySemaphore *sema)
 {
 #if defined(MS_WINDOWS)
     if (!ReleaseSemaphore(sema->platform_sem, 1, NULL)) {
-        Py_FatalError("parking_lot: ReleaseSemaphore failed");
+        _Py_FatalErrorFormat(__func__,
+            "parking_lot: ReleaseSemaphore failed (error: %u, handle: %p)",
+            GetLastError(), sema->platform_sem);
     }
 #elif defined(_Py_USE_SEMAPHORES)
     int err = sem_post(&sema->platform_sem);

_______________________________________________
Python-checkins mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/python-checkins.python.org
Member address: [email protected]