Module Name:	src
Committed By:	riastradh
Date:		Sat Feb 12 17:10:02 UTC 2022
Modified Files:
	src/common/lib/libc/arch/mips/atomic: membar_ops.S
	src/sys/arch/mips/include: lock.h

Log Message:
mips: Brush up __cpu_simple_lock.

- Eradicate last vestiges of mb_* barriers.

- In __cpu_simple_lock_init, omit needless barrier.  It is the
  caller's responsibility to ensure __cpu_simple_lock_init happens
  before other operations on it anyway, so there was never any need
  for a barrier here.

- In __cpu_simple_lock_try, leave comments about memory ordering
  guarantees of the kernel's _atomic_cas_uint, which are inexplicably
  different from the non-underscored atomic_cas_uint.

- In __cpu_simple_unlock, use membar_exit instead of mb_memory, and
  do it unconditionally.  This ensures that in
  __cpu_simple_lock/.../__cpu_simple_unlock, all memory operations in
  the ellipsis happen before the store that releases the lock.
  (An illustrative sketch of the resulting unlock path appears at the
  end of this mail.)

  - On Octeon, the barrier was omitted altogether, which is a bug --
    it needs to be there or else there is no happens-before relation
    and whoever takes the lock next might see stale values stored or
    even stomp over the unlocking CPU's delayed loads.

  - On non-Octeon, the mb_memory was sync.  Using membar_exit
    preserves this.

  XXX On Octeon, membar_exit only issues syncw -- this seems wrong,
  only store-before-store and not load/store-before-store, unless the
  CNMIPS architecture guarantees it is sufficient here like
  SPARCv8/v9 PSO (`Partial Store Order').

- Leave an essay with citations about why we have an apparently
  pointless syncw _after_ releasing a lock, to work around a design
  bug^W^Wquirk in cnmips which sometimes buffers stores for hundreds
  of thousands of cycles for fun unless you issue syncw.


To generate a diff of this commit:
cvs rdiff -u -r1.10 -r1.11 src/common/lib/libc/arch/mips/atomic/membar_ops.S
cvs rdiff -u -r1.21 -r1.22 src/sys/arch/mips/include/lock.h

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
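
For illustration only -- this is not part of the commit and not the
literal contents of lock.h.  It is a minimal C sketch of the unlock
ordering described above: a release barrier before the unlocking
store, plus a trailing syncw on cnmips.  The names example_lock_t,
EXAMPLE_UNLOCKED, example_simple_unlock, and the EXAMPLE_OCTEON guard
are hypothetical stand-ins for the machine-dependent definitions;
membar_exit() is the real NetBSD primitive declared in <sys/atomic.h>.

#include <sys/atomic.h>			/* membar_exit() */

/* Hypothetical stand-ins for the machine-dependent definitions. */
typedef volatile unsigned int example_lock_t;
#define	EXAMPLE_UNLOCKED	0

static inline void
example_simple_unlock(example_lock_t *lp)
{

	/*
	 * Release barrier: all memory operations in the critical
	 * section are ordered before the unlocking store below.
	 */
	membar_exit();

	/* The store that releases the lock. */
	*lp = EXAMPLE_UNLOCKED;

#ifdef EXAMPLE_OCTEON	/* placeholder guard for a cnmips-only build */
	/*
	 * cnmips can buffer stores for a very long time; an explicit
	 * syncw pushes the unlocking store out promptly (this orders
	 * store-before-store only, per the XXX note above).
	 */
	__asm volatile("syncw" ::: "memory");
#endif
}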