https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89350
--- Comment #1 from Martin Sebor <msebor at gcc dot gnu.org> ---
The -Wstringop-overflow warning is a consequence of pointer offsets being treated as unsigned integers, argc being signed, and the object size computation returning scalars rather than ranges. In the first test case (with argc + 0):

  <bb 3> [local count: 354334802]:
  _1 = (sizetype) argc_5(D);
  _2 = -_1;
  dst_7 = &MEM[(void *)&buf + 128B] + _2;
  src.0_3 = src;
  __builtin_memcpy (dst_7, src.0_3, _1);

When computing the size of dst_7, the compute_objsize() function determines the offset _2 to be in the range [1, SIZE_MAX] and doesn't consider the fact that the upper half of the range represents negative values. As a result, it determines the size to be zero. The number of bytes to write (i.e., (size_t)(argc + 1)) is [1, SIZE_MAX].

The test case makes the tacit assumption that argc is necessarily non-negative. That makes sense for the argc argument to main but not in other situations. Changing the if condition to make the assumption explicit, 'if (argc > 0)', eliminates the warning. This makes the range of the offset [-INT_MAX, -1], and because compute_objsize() doesn't handle negative offsets, it fails to determine the size. There's a comment indicating that this is not a feature but a known limitation:

  /* Ignore negative offsets for now.  For others, use the lower
     bound as the most optimistic estimate of the (remaining) size.  */

If I recall correctly, I did that not because negative offsets cannot be handled better but to keep the code simple. Either way, with negative offsets handled the warning will not be issued.

The missing -Wstringop-overflow in the second test case (with argc + 1):

  <bb 3> [local count: 354334802]:
  _1 = (sizetype) argc_7(D);
  _2 = -_1;
  dst_9 = &MEM[(void *)&buf + 128B] + _2;
  _3 = argc_7(D) + 1;
  _4 = (long unsigned int) _3;
  src.0_5 = src;
  __builtin_memcpy (dst_9, src.0_5, _4);

is due to the number of bytes to write being [0, INT_MAX], so the warning doesn't trigger.
The bogus warning can be avoided in the first case simply by punting on offsets that could be in the negative range, but almost certainly not without some false negatives. I'm not sure that's necessarily a good tradeoff (I don't know that it isn't, either). Is this code representative of some widespread idiom?

A more robust solution, one that gets rid of the false positives without as many false negatives, will involve changing compute_objsize() to return a range of sizes rather than a constant. But that will have to wait for GCC 10.