https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118220
--- Comment #3 from Jonathan Gruber <jonathan.gruber.jg at gmail dot com> ---
(In reply to Richard Biener from comment #2)
> I think we discussed replacing if (!p) with if (size > N) for some special
> N but disregarded it for a reason I don't exactly remember.
>
> Yes, the testcase invokes UB after the change, but it also relies on
> behavior of glibc here (but of course the valid "glibc program" is no
> longer valid after the change).
>
> Consider an implementation that does
>
>   void *malloc (size_t) { return NULL; }
>
> Would
>
>   int test(void) {
>     char *const p = malloc(4);
>
>     if (!p) {
>       return 0;
>     }
>
>     int i = 1 / 0;
>
>     free(p);
>
>     return i;
>   }
>
> then be invalid to optimize by eliding the malloc/free and the if (!p)
> check? Alternatively, the program might know memory is exhausted when
> calling test(), and expect malloc() to always fail. But can the programmer
> really rely on this?
>
> So I'm not sure if this is really a bug.
>
> There's a flag to disable the optimization.

I'm not sure if this completely answers the question, but, as far as I know, the PTRDIFF_MAX request-size limit for glibc's malloc is documented behavior, so, provided the program links against glibc, the programmer can certainly rely on malloc failing for requests above that limit. From this perspective, GCC's choice to elide the malloc/free in the test case is arguably a bug.