Hi,

On Fri, Nov 15 2024, Richard Biener wrote:
> On Fri, Nov 15, 2024 at 1:45 PM Jan Hubicka <hubi...@ucw.cz> wrote:
>>
>> >
>> > The patch only ever skips just one conversion, never a chain of them and
>> > deliberately so, for the reasons shown in the example.
>> >
>> > However, I have just realized that combining pass_through jump functions
>> > during inlining may unfortunately have exactly the same effect, so we
>> > indeed need to be more careful.
>>
>> Ah, right.  I was wondering whether I could trigger something like
>> this and this option did not come to mind.
>> >
>> > The motivating example converts a bool (integer with precision one) to
>> > an int, so what about the patch below which allows converting between
>> > integers and to a higher precision?  (Assuming the patch will pass
>> > bootstrap and testing on x86_64-linux which is underway).
>>
>> Allowing only conversion that monotonously increase precision looks.

As in "looks better?"  Or perhaps even "looks good?"

>> Perhaps Richi may have opinion here.
>> In a way this is similar to what we do in gimple_call_builtin that has
>> some type checking and also allows widening conversions.  So perhaps
>> this can be unified.
>>
>> I just noticed that the testsuite has a few examples that, for example,
>> define
>>
>> void *calloc (long, long)
>>
>> and this makes the test fail since the parameter is really unsigned
>> long and in turn we disable some calloc optimizations even though this
>> does not affect code generation.  Some passes use
>> gimple_call_builtin while others look up the callee decl by hand.
>
> I think all conversions that are not value changing (based on incoming range)
> are OK.  Even signed int with [3, 10] -> unsigned char [3, 10] would be OK.
> But signed int with [-1, 1] -> unsigned char [0, 1] [ 0xff ] might
> cause problems.
>
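
Just to restate the quoted concern as a standalone example in plain C,
with nothing GCC-specific involved:

/* Narrowing a signed int in [-1, 1] to unsigned char keeps 0 and 1 but
   turns -1 into 255, i.e. the conversion is value-changing.  */
#include <stdio.h>

int
main (void)
{
  for (int i = -1; i <= 1; i++)
    printf ("%d -> %u\n", i, (unsigned int) (unsigned char) i);
  return 0;
}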

Right, I was not going to use ranges for this because I suspected that
more often than not the range would be unknown.  But if that's the
suggestion, would something like the following (only very mildly tested)
function be OK?

I kept the type check because it looks quite a bit cheaper and would
also work for conversions between integers with the same precision but
different signedness, which we can safely fold_convert between as well.
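
To illustrate that last point in plain C (this is only a sketch of the
C-level behaviour, relying on GCC defining the out-of-range unsigned to
signed conversion as modulo reduction, and it says nothing about the
GCC internals): a signedness change at the same precision is a
bijection, so no values are lost.

/* Standalone sketch, not part of the patch below: every int survives a
   round trip through unsigned int.  */
#include <limits.h>
#include <stdio.h>

int
main (void)
{
  const int vals[] = { INT_MIN, -1, 0, 1, INT_MAX };
  for (unsigned i = 0; i < sizeof vals / sizeof vals[0]; i++)
    {
      unsigned int u = (unsigned int) vals[i];
      printf ("%d -> %u -> %d\n", vals[i], u, (int) u);
    }
  return 0;
}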

---------- 8< ----------

/* If T is an SSA_NAME that is the result of a simple type conversion
   statement from an integer type to another integer type which is known
   to be able to represent the values the operand of the conversion can
   hold, return the operand of that conversion, otherwise return T.  */

static tree
skip_a_safe_conversion_op (tree t)
{
  if (TREE_CODE (t) != SSA_NAME
      || SSA_NAME_IS_DEFAULT_DEF (t))
    return t;

  gimple *def = SSA_NAME_DEF_STMT (t);
  if (!is_gimple_assign (def)
      || !CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def))
      || !INTEGRAL_TYPE_P (TREE_TYPE (t))
      || !INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (def))))
    return t;

  tree rhs1 = gimple_assign_rhs1 (def);
  if (TYPE_PRECISION (TREE_TYPE (t))
      >= TYPE_PRECISION (TREE_TYPE (rhs1)))
    return rhs1;

  if (!ipa_vr_supported_type_p (TREE_TYPE (rhs1)))
    return t;
  value_range vr (TREE_TYPE (rhs1));
  if (!get_range_query (cfun)->range_of_expr (vr, rhs1, def)
      || vr.undefined_p ())
    return t;

  widest_int new_minv = wi::to_widest (TYPE_MIN_VALUE (TREE_TYPE (t)));
  widest_int new_maxv = wi::to_widest (TYPE_MAX_VALUE (TREE_TYPE (t)));
  irange &ir = as_a<irange> (vr);
  if (new_minv <= widest_int::from (ir.lower_bound (),
                                    TYPE_SIGN (TREE_TYPE (rhs1)))
      && new_maxv >= widest_int::from (ir.upper_bound (),
                                       TYPE_SIGN (TREE_TYPE (rhs1))))
    return rhs1;

  return t;
}

---------- 8< ----------

Note that while the value-range based approach also works for the
motivating test-case, that is simply because the lower_bound() and
upper_bound() methods, invoked on a VARYING value range of an integral
type, yield the minimum and maximum value of that type, respectively
(for the one-bit _Bool _1 that means [0, 1], which of course fits into
int).  And we do get a VARYING range here because
get_range_query (cfun)->range_of_expr, invoked to find the range of _1
at the conversion statement in the function below, indeed gives back a
VARYING one.

__attribute__((noinline))
int foo (int (*<T3e3>) (int) f)
{
  _Bool _1;
  int _2;
  int _6;

  <bb 2> [local count: 1073741824]:
  _1 = f_3(D) == 0B;
  _2 = (int) _1;
  _6 = bar (_2);
  return _6;

}
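
For reference, C source along the following lines (my reconstruction,
the actual test case may well look different) produces GIMPLE like the
above:

/* Reconstructed source, only for illustration: the result of the
   pointer comparison is a one-bit _Bool that is widened to int before
   being passed on.  */
extern int bar (int);

__attribute__((noinline))
int
foo (int (*f) (int))
{
  return bar (f == 0);
}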

On the other hand, when I added a simple fprintf to
skip_a_safe_conversion_op just before the return that is reached only
when the value range, and not simply the precision check, tells us that
the conversion is safe, and then ran make stage2-bubble (C, C++ and
Fortran), it triggered 269 times.  So there is some potential, even if
it is not huge.

I think that the motivating test case is a useful one and so I'd like
to get some form of the patch in for GCC 15.  I will be grateful for
any feedback on how best to determine that a conversion is safe to
skip.

Martin
