https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83239
--- Comment #9 from Jonathan Wakely <redi at gcc dot gnu.org> ---
(In reply to Martin Sebor from comment #6)
> This libstdc++ patch helps avoid both the warning and the bogus memset. if
> Jonathan isn't opposed to this kind of annotation I think there might be
> other places in vector where this approach could improve the emitted object
> code.
>
> diff --git a/libstdc++-v3/include/bits/vector.tcc
> b/libstdc++-v3/include/bits/vector.tcc
> index eadce3c..8093f5e 100644
> --- a/libstdc++-v3/include/bits/vector.tcc
> +++ b/libstdc++-v3/include/bits/vector.tcc
> @@ -582,8 +582,13 @@ _GLIBCXX_BEGIN_NAMESPACE_CONTAINER
>      {
>        if (__n != 0)
>          {
> -          if (size_type(this->_M_impl._M_end_of_storage
> -                        - this->_M_impl._M_finish) >= __n)
> +          size_type __navail = size_type(this->_M_impl._M_end_of_storage
> +                                         - this->_M_impl._M_finish);
> +
> +          if (__navail > max_size () - size ())
> +            __builtin_unreachable ();

In principle I'm strongly in favour of adding annotations like this to help
the compiler reason about our components like std::vector.

However, this touches on a topic I raised on the LWG reflector last week:
what does max_size() mean? What happens if we exceed it?

Here's a program which creates a vector larger than its own max_size():

#include <vector>
#include <cassert>

template<typename T>
struct Alloc
{
  using value_type = T;
  template<typename U> struct rebind { using other = Alloc<U>; };

  Alloc() = default;
  template<typename U> Alloc(const Alloc<U>&) { }

  T* allocate(std::size_t n)
  { return std::allocator<T>().allocate(n); }

  void deallocate(T* p, std::size_t n)
  { std::allocator<T>().deallocate(p, n); }

  std::size_t max_size() const noexcept { return 100; }
};

template<typename T, typename U>
bool operator==(const Alloc<T>&, const Alloc<U>&) { return true; }

template<typename T, typename U>
bool operator!=(const Alloc<T>&, const Alloc<U>&) { return false; }

int main()
{
  std::vector<int, Alloc<int>> v(Alloc<int>().max_size() + 1);
  assert(v.size() > v.max_size());
}

I think Martin's patch would make this undefined, but the standard doesn't
actually say it's undefined to exceed max_size(). So the patch isn't OK.

I think failing to say what happens if you exceed max_size() is a defect in
the standard, but maybe it's a bug in our implementation and we need to add
extra code to ensure we don't try to grow larger than max_size(). With either
of those changes, the patch seems OK to me.
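To illustrate the second option, a rough standalone sketch (not the real
libstdc++ code; the type and member names below are invented): if every
growth path refuses to push the capacity past max_size(), then
"capacity <= max_size()" becomes a genuine class invariant, and an
annotation like Martin's is provably true rather than something a
user-supplied allocator can break.

#include <cstddef>
#include <stdexcept>

// Invented toy type, only to show the shape of the check; this is not
// what vector.tcc does today.
struct toy_vec
{
  std::size_t size_ = 0;
  std::size_t cap_  = 0;   // desired invariant: cap_ <= max_size()

  static constexpr std::size_t max_size() { return 100; }

  // Guard every growth request so the capacity can never exceed
  // max_size(); throwing length_error is what reserve() already does
  // for requests larger than max_size().
  void grow_by(std::size_t n)
  {
    if (n > max_size() - size_)
      throw std::length_error("toy_vec grows past max_size()");

    std::size_t wanted = size_ + n;   // <= max_size() after the check
    if (wanted > cap_)
      cap_ = wanted;                  // so cap_ <= max_size() still holds

    // With that invariant in place, this hint is provably true, and the
    // compiler may use it to bound later fills/copies, which is the
    // optimisation Martin's patch is after.
    if (cap_ - size_ > max_size() - size_)
      __builtin_unreachable();

    size_ = wanted;
  }
};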