https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109442
--- Comment #23 from Jan Hubicka <hubicka at gcc dot gnu.org> ---
With Jakub's builtin_operator_new patch and
https://gcc.gnu.org/pipermail/gcc-patches/2024-November/667834.html applied,
we now optimize away the allocation on the original testcase and produce:

int vat1 (struct vector & v1)
{
  unsigned long _9;
  int * _13;
  int * _14;
  long int _15;

  <bb 2> [local count: 1073741824]:
  _13 = MEM[(const struct vector *)v1_2(D)].D.34245._M_impl.D.33558._M_finish;
  _14 = MEM[(const struct vector *)v1_2(D)].D.34245._M_impl.D.33558._M_start;
  _15 = _13 - _14;
  _9 = (unsigned long) _15;
  if (_9 > 9223372036854775804)
    goto <bb 3>; [54.67%]
  else
    goto <bb 4>; [45.33%]

  <bb 3> [local count: 587014656]:
  std::__throw_bad_array_new_length ();

  <bb 4> [local count: 1015040358]:
  return 10;
}

So I guess we are missing a __builtin_assert somewhere that the length of the
vector being copied is always smaller than half of the address space...
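For reference, a minimal sketch of what is being discussed (the exact testcase
is in the PR; the signature and body below are reconstructed from the GIMPLE
above and are an assumption), together with the kind of value-range hint that
would let the remaining __throw_bad_array_new_length branch fold away:

#include <vector>
#include <cstddef>
#include <cstdint>

// Assumed shape of the original testcase: the copy is dead, so the
// allocation -- and ideally the residual size check -- should go away.
int vat1 (std::vector<int> &v1)
{
  std::vector<int> copy = v1;
  return 10;
}

// The surviving branch guards the allocation size in the copy's allocation
// path.  Asserting that a valid vector never spans more than half of the
// address space, e.g. via the usual conditional __builtin_unreachable()
// idiom below, would let VRP fold the throw away.  assume_vector_size_ok
// is a hypothetical helper, not an existing libstdc++ or GCC facility.
static inline void
assume_vector_size_ok (std::size_t bytes)
{
  if (bytes > (std::size_t) PTRDIFF_MAX)
    __builtin_unreachable ();
}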