https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110061

Wilco <wilco at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|DUPLICATE                   |---
             Status|RESOLVED                    |NEW

--- Comment #7 from Wilco <wilco at gcc dot gnu.org> ---
I don't see the issue here. GCC for x86/x86_64 has implemented atomic
loads with a compare-exchange (which always performs a write, even if
the compare fails) for many years. LLVM does the same for
AArch64/x86/x86_64.
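
For context, a minimal sketch of that technique, using GCC's
__atomic builtins on a 16-byte type (the function name is
illustrative; depending on flags such as -mcx16 the builtin may
expand inline or call into libatomic):

  /* Emulating a 128-bit atomic load with a compare-exchange.
   * Expected and desired are both 0: if the CAS succeeds (*p was 0),
   * 0 is stored back; if it fails, *p is copied into expected.
   * Either way, expected ends up holding the current value of *p.
   * The locked CAS acquires the line for writing, so *p must live in
   * writable memory -- the write-even-on-failure behavior described
   * above. */
  static __int128 load128(__int128 *p)
  {
      __int128 expected = 0;
      __atomic_compare_exchange_n(p, &expected, expected,
                                  0 /* strong */,
                                  __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
      return expected;
  }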

If you believe this is incorrect or invalid, do you have any evidence
that it causes crashes in real applications?

As a result of GCC's bad choice to use locking atomics on AArch64,
many applications are forced to implement 128-bit atomics themselves
using hacky inline assembler. One example for reference:

https://github.com/boostorg/atomic/blob/08bd4e20338c503d2acfdddfdaa8f5e0bcf9006c/include/boost/atomic/detail/core_arch_ops_gcc_aarch64.hpp#L1635
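
For illustration, here is a hedged sketch of the kind of hand-rolled
sequence involved (not Boost's actual code; the name is illustrative):
a 128-bit load built from an LDAXP/STXP exclusive pair. Note that even
this "load" contains a store -- before LSE2, an LDXP on its own is not
guaranteed to be single-copy atomic, so the STXP is what makes it so,
and it again requires the memory to be writable:

  #include <stdint.h>

  /* Illustrative only -- not Boost's actual implementation. */
  static inline __int128 load128_acquire(__int128 *p)
  {
      uint64_t lo, hi;
      uint32_t fail;
      do {
          __asm__ volatile(
              "ldaxp %0, %1, %3\n\t"   /* exclusive 128-bit load */
              "stxp  %w2, %0, %1, %3"  /* store back unchanged   */
              : "=&r"(lo), "=&r"(hi), "=&r"(fail), "+Q"(*p)
              :
              : "memory");
      } while (fail);  /* retry if the exclusive monitor was lost */
      return ((unsigned __int128)hi << 64) | lo;
  }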

The question is: do you believe compilers should provide users with
the fast and efficient atomics they need, or do you want to force
every application to implement its own version of 128-bit atomics?
