https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71945

--- Comment #8 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jonathan Wakely <r...@gcc.gnu.org>:

https://gcc.gnu.org/g:faae3692f75003f5df226ed776d7386bf848dd00

commit r16-3811-gfaae3692f75003f5df226ed776d7386bf848dd00
Author: Jonathan Wakely <jwak...@redhat.com>
Date:   Thu Jul 17 00:21:54 2025 +0100

    libstdc++: Allow std::shared_ptr reference counts to be negative [PR71945]

    This change doubles the effective range of the std::shared_ptr and
    std::weak_ptr reference counts for most 64-bit targets.

    The counter type, _Atomic_word, is usually a signed 32-bit int (except
    on Solaris v9 where it is a signed 64-bit long). The return type of
    std::shared_ptr::use_count() is long. For targets where long is wider
    than _Atomic_word (most 64-bit targets) we can treat the _Atomic_word
    reference counts as unsigned and allow them to wrap around from their
    most positive value to their most negative value without any problems.
    The logic that operates on the counts only cares whether they are
    zero or non-zero, and never performs relational comparisons. The
    standard requires atomic fetch_add operations on integer types to
    behave as if performed on the corresponding unsigned types, so that
    overflow is well-defined:

      "the result is as if the object value and parameters were converted to
      their corresponding unsigned types, the computation performed on those
      types, and the result converted back to the signed type."
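
    For instance, a minimal standalone illustration (not part of the
    patch) of this wrap-around guarantee, assuming a 32-bit int:

      #include <atomic>
      #include <climits>
      #include <iostream>

      int main()
      {
        std::atomic<int> count{INT_MAX};
        count.fetch_add(1);  // well-defined: wraps as if unsigned
        std::cout << count.load() << '\n';  // prints INT_MIN (-2147483648)
      }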

    So if we allow the counts to wrap around to negative values, all we need
    to do is cast the value to make_unsigned_t<_Atomic_word> before
    returning it as long from the use_count() function.
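
    A simplified sketch of that cast (not the exact library code),
    assuming the typical LP64 case where _Atomic_word is a 32-bit int
    and long is 64 bits wide:

      #include <type_traits>

      using _Atomic_word = int;  // typical; Solaris v9 uses long

      long
      use_count_sketch(_Atomic_word __count)
      {
        // A wrapped (negative) count converts back to its intended
        // magnitude, e.g. -2147483648 becomes 2147483648.
        return static_cast<std::make_unsigned_t<_Atomic_word>>(__count);
      }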

    In practice, even exceeding INT_MAX is extremely unlikely, as it
    would require billions of shared_ptr or weak_ptr objects to have
    been constructed and never destroyed. However, if that happens, we
    now have double the range before the count returns to zero and
    causes problems.

    Some of the member functions for the _Sp_counted_base<_S_single>
    specialization are adjusted to use the __atomic_add_single and
    __exchange_and_add_single helpers instead of plain ++ and --
    operations. This is done because those helpers use unsigned
    arithmetic, whereas the plain increments and decrements would have
    undefined behaviour on overflow.
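
    A hedged sketch of the distinction; the helper name and body below
    are an illustrative approximation of the unsigned-arithmetic
    approach, not the actual definitions in <ext/atomicity.h>:

      #include <type_traits>

      using _Atomic_word = int;  // typical definition

      inline _Atomic_word
      exchange_and_add_single_sketch(_Atomic_word* __mem, int __val)
      {
        using _Up = std::make_unsigned_t<_Atomic_word>;
        _Atomic_word __result = *__mem;
        // Unsigned addition wraps instead of overflowing; converting
        // the sum back to the signed type is well-defined modular
        // arithmetic in C++20 (and GCC guarantees it in earlier
        // dialects too).
        *__mem = static_cast<_Atomic_word>(static_cast<_Up>(__result)
                                           + static_cast<_Up>(__val));
        return __result;
      }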

    libstdc++-v3/ChangeLog:

            PR libstdc++/71945
            * include/bits/shared_ptr_base.h
            (_Sp_counted_base::_M_get_use_count): Cast _M_use_count to
            unsigned before returning as long.
            (_Sp_counted_base<_S_single>::_M_add_ref_copy): Use atomic
            helper function to adjust ref count using unsigned arithmetic.
            (_Sp_counted_base<_S_single>::_M_weak_release): Likewise.
            (_Sp_counted_base<_S_single>::_M_get_use_count): Cast
            _M_use_count to unsigned before returning as long.
            (_Sp_counted_base<_S_single>::_M_add_ref_lock_nothrow): Use
            _M_add_ref_copy to do increment using unsigned arithmetic.
            (_Sp_counted_base<_S_single>::_M_release): Use atomic helper and
            _M_weak_release to do decrements using unsigned arithmetic.
            (_Sp_counted_base<_S_mutex>::_M_release): Add comment.
            (_Sp_counted_base<_S_single>::_M_weak_add_ref): Remove
            specialization.

    Reviewed-by: Tomasz Kamiński <tkami...@redhat.com>
