https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102951
Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #1 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
extern int a[];

int *
foo (void)
{
  int *p1 = &a[1];
  int *p2 = &a[2];
  return p1 < p2 ? p1 : p2;
}

int
bar (void)
{
  int *p1 = &a[1];
  int *p2 = &a[2];
  return p1 < p2;
}

For the latter function, we optimize the comparison in match.pd:

/* When the addresses are not directly of decls compare base and offset.
   This implements some remaining parts of fold_comparison address
   comparisons but still no complete part of it.  Still it is good
   enough to make fold_stmt not regress when not dispatching to fold_binary.  */
(for cmp (simple_comparison)
 (simplify
  (cmp (convert1?@2 addr@0) (convert2? addr@1))
  (with { poly_int64 off0, off1; ...

So, I guess for MIN_EXPR/MAX_EXPR with ADDR_EXPR operands we can optimize
similarly.  The question is whether we should do that by repeating that huge
block of code, by outlining big parts of it into a helper function, or
perhaps by doing e.g.

(with {
#if GENERIC
   tree l = generic_simplify (..., LT_EXPR, ...);
#else
   tree l = gimple_simplify (..., LT_EXPR, ...);
#endif
 }
 (if (l && integer_zerop (l)) @0)
 (if (l && integer_nonzerop (l)) @1)))

or so, i.e. try to fold an LT_EXPR instead of the MIN_EXPR or MAX_EXPR and,
if that folds to integer_zerop or integer_nonzerop, select the corresponding
operand.
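To make the intended folding concrete, here is a minimal standalone sketch in
plain C (not GCC internals; `struct addr', `fold_min_addr' and the locally
defined array are purely illustrative names) of the base+offset reasoning the
address-comparison pattern performs, applied to MIN_EXPR: when both addresses
share a base and have known constant byte offsets, the comparison decides
statically which operand to keep.

/* Minimal standalone sketch, not GCC code: it models the base+offset
   reasoning of the match.pd address-comparison pattern, applied to
   MIN_EXPR.  `struct addr' and `fold_min_addr' are illustrative names,
   and the array `a' is defined here only to keep the example linkable.  */
#include <stddef.h>
#include <stdio.h>

static int a[3];

struct addr
{
  const void *base;   /* e.g. the array `a' */
  ptrdiff_t off;      /* constant byte offset from the base */
};

/* Return 0 to select the first operand, 1 to select the second,
   or -1 when the bases differ and nothing is known statically.  */
static int
fold_min_addr (struct addr p1, struct addr p2)
{
  if (p1.base != p2.base)
    return -1;
  return p1.off <= p2.off ? 0 : 1;
}

int
main (void)
{
  /* &a[1] vs &a[2]: same base, offsets 1*sizeof(int) and 2*sizeof(int),
     so the MIN_EXPR in foo () above folds to its first operand.  */
  struct addr p1 = { a, 1 * (ptrdiff_t) sizeof (int) };
  struct addr p2 = { a, 2 * (ptrdiff_t) sizeof (int) };
  printf ("%d\n", fold_min_addr (p1, p2));   /* prints 0, i.e. &a[1] */
  return 0;
}

This is essentially the dispatch-to-comparison idea sketched above: compute
the LT_EXPR result first and, when it is known, substitute @0 or @1 instead
of keeping the MIN_EXPR/MAX_EXPR.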