The branch main has been updated by markj:

URL: https://cgit.FreeBSD.org/src/commit/?id=4b79443927ec2c53514e73b06eb2a9d241882585

commit 4b79443927ec2c53514e73b06eb2a9d241882585
Author:     Mark Johnston <ma...@freebsd.org>
AuthorDate: 2025-02-22 01:23:31 +0000
Commit:     Mark Johnston <ma...@freebsd.org>
CommitDate: 2025-02-22 01:26:38 +0000

    umtx: Fix a bug in do_lock_pp()
    
    If the lock is unowned (i.e., owner == UMUTEX_CONTESTED), the
    compare-and-set of the owner word can fail spuriously, and in that case
    we need to retry the loop.  Otherwise, the calling thread falls through
    to sleep and can end up sleeping forever, since no owner is left to
    wake it up.
    
    The same problem exists in do_set_ceiling(), which open-codes
    do_lock_pp(), so fix it there too.
    
    Reviewed by:    olce
    Reported by:    Daniel King <dmk...@adacore.com>
    MFC after:      2 weeks
    Sponsored by:   Innovate UK
    Differential Revision:  https://reviews.freebsd.org/D49031
---
 sys/kern/kern_umtx.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/sys/kern/kern_umtx.c b/sys/kern/kern_umtx.c
index 938dcf2ff1cb..f9189024d629 100644
--- a/sys/kern/kern_umtx.c
+++ b/sys/kern/kern_umtx.c
@@ -2616,6 +2616,10 @@ do_lock_pp(struct thread *td, struct umutex *m, uint32_t flags,
                        }
                } else if (owner == UMUTEX_RB_NOTRECOV) {
                        error = ENOTRECOVERABLE;
+               } else if (owner == UMUTEX_CONTESTED) {
+                       /* Spurious failure, retry. */
+                       umtxq_unbusy_unlocked(&uq->uq_key);
+                       continue;
                }
 
                if (try != 0)
@@ -2825,6 +2829,10 @@ do_set_ceiling(struct thread *td, struct umutex *m, uint32_t ceiling,
                } else if (owner == UMUTEX_RB_NOTRECOV) {
                        error = ENOTRECOVERABLE;
                        break;
+               } else if (owner == UMUTEX_CONTESTED) {
+                       /* Spurious failure, retry. */
+                       umtxq_unbusy_unlocked(&uq->uq_key);
+                       continue;
                }
 
                /*

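For context, the sketch below illustrates the retry-on-spurious-failure pattern
that the fix restores, using C11 atomics in userland rather than the kernel's
casueword32() path.  The names UNOWNED, MY_ID, try_acquire(), and lock_loop()
are hypothetical stand-ins for the umutex owner word, UMUTEX_CONTESTED, and the
acquisition loop in do_lock_pp(); this is a minimal illustration, not the
kernel code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define	UNOWNED	0u	/* stand-in for UMUTEX_CONTESTED */
#define	MY_ID	42u	/* stand-in for the acquiring thread's identity */

static _Atomic unsigned int owner = UNOWNED;

/*
 * One acquisition attempt.  A weak compare-exchange is allowed to fail
 * spuriously, i.e., return false even though the owner word still held the
 * expected value; either way, *observed is updated to the value seen.
 */
static bool
try_acquire(unsigned int *observed)
{

	return (atomic_compare_exchange_weak(&owner, observed, MY_ID));
}

static void
lock_loop(void)
{
	unsigned int observed;

	for (;;) {
		observed = UNOWNED;
		if (try_acquire(&observed))
			return;		/* we now own the lock */
		if (observed == UNOWNED) {
			/*
			 * Spurious failure while the lock is unowned: no
			 * owner will ever wake us, so retry immediately
			 * rather than going to sleep.  This is the case the
			 * commit adds handling for.
			 */
			continue;
		}
		/*
		 * A real owner exists.  The kernel would queue the thread
		 * and sleep here; this single-threaded sketch just retries.
		 */
	}
}

int
main(void)
{

	lock_loop();
	printf("owner is now %u\n", atomic_load(&owner));
	return (0);
}

The key point is the continue on the unowned path: without it, the loop falls
through to the sleep step even though no owner will ever issue a wakeup, which
is the hang the commit fixes.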