Currently, for example in fold_sign_changed_comparison, we produce integer constants that are not inside the range of values of their type, denoted by [TYPE_MIN_VALUE (t), TYPE_MAX_VALUE (t)].  As an example, consider a type with range [10, 20] and the comparison created by the Ada frontend:
  if ((signed char)t == -128)

t being of that [10, 20] type with TYPE_PRECISION 8, just like the constant -128.  So fold_sign_changed_comparison comes along and decides to strip the conversion and convert the constant to type T, which looks like

 <integer_type 0x2b8156099f00 j__target_type___XDLU_10__20
    type <integer_type 0x2b8156099c00 js__TtB sizes-gimplified public visited QI
        size <integer_cst 0x2b8155fdb7e0 constant invariant visited 8>
        unit size <integer_cst 0x2b8155fdb810 constant invariant visited 1>
        user align 8 symtab 0 alias set 4 canonical type 0x2b8156099c00
        precision 8 min <integer_cst 0x2b8156096ab0 -128>
        max <integer_cst 0x2b8156096cc0 127>
        RM size <integer_cst 0x2b8155fdb7e0 8>>
    readonly sizes-gimplified public unsigned QI
    size <integer_cst 0x2b8155fdb7e0 8> unit size <integer_cst 0x2b8155fdb810 1>
    user align 8 symtab 0 alias set 4 canonical type 0x2b8156099f00
    precision 8 min <integer_cst 0x2b8156096d20 10>
    max <integer_cst 0x2b8156096d80 20>
    RM size <integer_cst 0x2b8155fdb660 type <integer_type 0x2b8155fe80c0 bit_size_type> constant invariant 5>>

(note it's unsigned!).  So the new constant gets produced using force_fit_type_double with the above (unsigned) type, the comparison now prints as

  if (t == 128)

and the new constant 128 is now out of range of its type:

 <integer_cst 0x2b81560a2540 type <integer_type 0x2b8156099f00 j__target_type___XDLU_10__20>
    constant invariant 128>

(see the min/max values of that type above; a small standalone sketch of this arithmetic follows at the end of this mail).

What do we want to do about that?  Do we want to do anything about it?  If we don't want to do anything about it, why care about an exact TREE_TYPE of integer constants if the only thing that matters is signedness and type precision?

Thanks for any hints,
Richard.
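To make the bit-level effect concrete, here is a minimal standalone sketch (plain C, not GCC internals, and not using any GCC API): it approximates what force_fit_type_double does to the constant by keeping the low TYPE_PRECISION bits of -128 and reading them back as unsigned, then checks the result against the [10, 20] bounds.  The precision and bounds are hard-coded from the type dump above.

  #include <stdio.h>

  int main (void)
  {
    const unsigned precision = 8;        /* TYPE_PRECISION of the subtype */
    const long long min = 10, max = 20;  /* TYPE_MIN_VALUE / TYPE_MAX_VALUE */
    const long long cst = -128;          /* constant from the comparison */

    /* Keep the low `precision` bits and reinterpret them as unsigned,
       which is roughly what the sign-changed fold does when it converts
       the constant to the unsigned subtype.  */
    unsigned long long mask = (1ULL << precision) - 1;
    unsigned long long refolded = (unsigned long long) cst & mask;

    printf ("refolded constant: %llu\n", refolded);
    printf ("fits [%lld, %lld]: %s\n", min, max,
            (refolded >= (unsigned long long) min
             && refolded <= (unsigned long long) max) ? "yes" : "no");
    return 0;
  }

Compiled and run, this prints "refolded constant: 128" and "fits [10, 20]: no" -- the same 128 that ends up out of range of the [10, 20] subtype after the fold.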