https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110061

Wilco <wilco at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Target Milestone|---                         |14.0

--- Comment #16 from Wilco <wilco at gcc dot gnu.org> ---
Fixed by
https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=3fa689f6ed8387d315e58169bb9bace3bd508c0a

libatomic: Enable lock-free 128-bit atomics on AArch64

Enable lock-free 128-bit atomics on AArch64.  This is backwards compatible with
existing binaries (since GCC always calls into libatomic for these operations,
all 128-bit atomic uses in a process are switched), gives better performance
than locking atomics, and is what most users expect.
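
For context, a minimal C sketch (not from the patch; the helper name
fetch_add_16 is made up here) of the kind of 128-bit atomic code that GCC
lowers to libatomic calls such as __atomic_load_16 and
__atomic_compare_exchange_16 on AArch64, which is why updating libatomic
switches every such use in a process:

    #include <stdbool.h>

    typedef unsigned __int128 u128;

    /* 128-bit fetch-and-add built from the __atomic builtins.  For 16-byte
       accesses AArch64 GCC emits calls into libatomic rather than inline
       instructions, so the process-wide behaviour is whatever libatomic
       provides.  */
    u128 fetch_add_16(u128 *p, u128 add)
    {
        u128 old = __atomic_load_n(p, __ATOMIC_RELAXED);
        /* On failure the CAS refreshes 'old' with the current value.  */
        while (!__atomic_compare_exchange_n(p, &old, old + add, true,
                                            __ATOMIC_SEQ_CST,
                                            __ATOMIC_RELAXED))
            ;
        return old;
    }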

128-bit atomic loads use a load/store exclusive loop if LSE2 is not supported.
This results in an implicit store that is invisible to software as long as the
given address is writable (which will be true when using atomics in real
code).
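
As a rough C-level illustration (the actual implementation in atomic_16.S is
hand-written LDXP/STXP assembly; load_16_via_cas is just a name used for this
sketch), a 128-bit atomic load can be emulated with a compare-and-exchange,
which may write the just-read value back to the location:

    /* Emulate a 128-bit atomic load with a compare-and-exchange.  When the
       guess matches, the same value is stored back; this is the kind of
       implicit store that requires the address to be writable even for a
       pure load.  */
    unsigned __int128 load_16_via_cas(unsigned __int128 *p)
    {
        unsigned __int128 val = 0;   /* arbitrary initial guess */
        __atomic_compare_exchange_n(p, &val, val, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return val;                  /* holds *p whether the CAS hit or missed */
    }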

This doesn't yet change __atomic_is_lock_free even though all atomics are
finally lock-free on AArch64.
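
A quick way to observe that, as a hypothetical test program (not part of the
patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned __int128 x = 0;
        /* The commit leaves __atomic_is_lock_free untouched, so this query
           may still report 0 for 16-byte objects even though the underlying
           operations are now lock-free.  */
        printf("16-byte lock-free? %d\n",
               (int) __atomic_is_lock_free(sizeof x, &x));
        return 0;
    }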

libatomic:
        * config/linux/aarch64/atomic_16.S: Implement lock-free ARMv8.0
        atomics.
        (libat_exchange_16): Merge RELEASE and ACQ_REL/SEQ_CST cases.
        * config/linux/aarch64/host-config.h: Use atomic_16.S for baseline
        v8.0.
