https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104848
--- Comment #5 from anlauf at gcc dot gnu.org ---
(In reply to anlauf from comment #4)
> The following example shows that bad overflow handling is a regression that
> was likely introduced in 6.x:
>
> program p
>   integer, parameter :: b(0) = 1 + [ huge(1) ]
> end

Another potential fix for this is:

diff --git a/gcc/fortran/arith.cc b/gcc/fortran/arith.cc
index fc9224ebc5c..67ef10d4bf7 100644
--- a/gcc/fortran/arith.cc
+++ b/gcc/fortran/arith.cc
@@ -1710,8 +1720,8 @@ eval_intrinsic (gfc_intrinsic_op op,
   if (rc != ARITH_OK)
     {
       gfc_error (gfc_arith_error (rc), &op1->where);
-      if (rc == ARITH_OVERFLOW)
-        goto done;
+      // if (rc == ARITH_OVERFLOW)
+      //   goto done;
       if (rc == ARITH_DIV0 && op2->ts.type == BT_INTEGER)
         gfc_seen_div0 = true;

While this fixes the testcases in this PR, it regresses on the following:

  gfortran.dg/pr84734.f90 (from r8-7226, which added the code commented out above)
  gfortran.dg/integer_exponentiation_6.F90 (from r5-7381)

The latter is a weird testcase, which changed behavior and prints different
values (0 for gfortran <= 8, 4611686018427387904 for gfortran >= 9).
(Other compilers either print 0 or produce an error, so the current behavior
is somewhat at odds with the others.)

Do we have a concept for how to handle integer and real overflow depending on
the flag -f(no-)range-check?
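
Regarding that last question, here is a minimal sketch of what the flag means
for integer constant folding today (the program is mine, not from the
testsuite; the wrapped value in the comment is what I would expect from the
documented modular arithmetic, not verified against a particular revision):

program range_check_demo
  ! Rejected with an arithmetic overflow error under the default
  ! -frange-check; with -fno-range-check the folded constant is
  ! expected to wrap around to -huge(1)-1 (modular arithmetic),
  ! i.e. -2147483648 for default integer kind.
  integer, parameter :: wrapped = huge(1) + 1
  print *, wrapped
end program range_check_demo

For real constants the documentation suggests -fno-range-check instead folds
an overflowing value to +/-Inf, so whatever concept we settle on should
probably spell out both the integer and the real case.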