"Roger Sayle" <ro...@nextmovesoftware.com> writes:
> This patch introduces new RTX codes to allow the RTL passes and
> backends to consistently represent high-part multiplications.
> Currently, the RTL used by different backends for expanding
> smul<mode>3_highpart and umul<mode>3_highpart varies greatly,
> with many but not all choosing to express this with something like:
>
> (define_insn "smuldi3_highpart"
>   [(set (match_operand:DI 0 "nvptx_register_operand" "=R")
>        (truncate:DI
>         (lshiftrt:TI
>          (mult:TI (sign_extend:TI
>                    (match_operand:DI 1 "nvptx_register_operand" "R"))
>                   (sign_extend:TI
>                    (match_operand:DI 2 "nvptx_register_operand" "R")))
>          (const_int 64))))]
>   ""
>   "%.\\tmul.hi.s64\\t%0, %1, %2;")
>
> One complication with using this "widening multiplication" representation
> is that it requires an intermediate in a wider mode, making it difficult
> or impossible to encode a high-part multiplication of the widest supported
> integer mode.

Yeah.  It's also a problem when representing vector ops.

> A second is that it can interfere with optimization; for
> example simplify-rtx.c contains the comment:
>
>    case TRUNCATE:
>       /* Don't optimize (lshiftrt (mult ...)) as it would interfere
>          with the umulXi3_highpart patterns.  */
>
> Hopefully these problems are solved (or reduced) by introducing a
> new canonical form for high-part multiplications in RTL passes.
> This also simplifies insn patterns when one operand is constant.
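
For illustration, the nvptx pattern quoted earlier might reduce to something
like the following under the new representation (a hypothetical sketch using
the sh_mult spelling from this version of the patch; the final name was still
under discussion):

```
(define_insn "smuldi3_highpart"
  [(set (match_operand:DI 0 "nvptx_register_operand" "=R")
        (sh_mult:DI (match_operand:DI 1 "nvptx_register_operand" "R")
                    (match_operand:DI 2 "nvptx_register_operand" "R")))]
  ""
  "%.\\tmul.hi.s64\\t%0, %1, %2;")
```

Note that no TImode intermediate, truncate, or shift appears, and all
operands share the insn's own mode.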
>
> Whilst implementing some constant folding simplifications and
> compile-time evaluation of these new RTX codes, I noticed that
> this functionality could also be added for the existing saturating
> arithmetic RTX codes.  Likewise, while documenting these new RTX
> codes, I took the opportunity to silence the @xref warnings in
> invoke.texi.
>
> This patch has been tested on x86_64-pc-linux-gnu with "make bootstrap"
> and "make -k check" with no new failures.  Ok for mainline?
>
>
> 2021-09-25  Roger Sayle  <ro...@nextmovesoftware.com>
>
> gcc/ChangeLog
>       * gcc/rtl.def (SH_MULT, UH_MULT): New RTX codes for representing
>       signed and unsigned high-part multiplication respectively.
>       * gcc/simplify-rtx.c (simplify_binary_operation_1) [SH_MULT,
>       UH_MULT]: Simplify high-part multiplications by zero.
>       [SS_PLUS, US_PLUS, SS_MINUS, US_MINUS, SS_MULT, US_MULT,
>       SS_DIV, US_DIV]: Similar simplifications for saturating
>       arithmetic.
>       (simplify_const_binary_operation) [SS_PLUS, US_PLUS, SS_MINUS,
>       US_MINUS, SS_MULT, US_MULT, SH_MULT, UH_MULT]: Implement
>       compile-time evaluation for constant operands.
>       * gcc/dwarf2out.c (mem_loc_descriptor): Skip SH_MULT and UH_MULT.
>       * doc/rtl.texi (sh_mult, uh_mult): Document new RTX codes.
>       * doc/md.texi (smul@var{m}3_highpart, umul@var{m}3_highpart):
>       Mention the new sh_mul and uh_mul RTX codes.
>       * doc/invoke.texi: Silence @xref "compilation" warnings.

Looks like a good idea to me.  My only real comment is on the naming:
if possible, I think we should try to avoid introducing yet more
differences between optab names and rtl codes.  How about umul_highpart
for the unsigned code, to match both the optab and the existing
convention of adding “u” directly to the front of non-saturating
operations?

Things are more inconsistent for signed rtx codes: sometimes the
“s” is present and sometimes it isn't.  But since “smin” and “smax”
have it, I think we can justify having it here too.

So I think we should use smul_highpart and umul_highpart.
It's a bit more wordy than sh_mul, but still a lot shorter than
the status quo ;-)

> diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
> index ebad5cb..b4b04b9 100644
> --- a/gcc/simplify-rtx.c
> +++ b/gcc/simplify-rtx.c
> @@ -4142,11 +4142,40 @@ simplify_context::simplify_binary_operation_1 (rtx_code code,
>      case US_PLUS:
>      case SS_MINUS:
>      case US_MINUS:
> +      /* Simplify x + 0 to x, if possible.  */

Nit: the comment should say "x +/- 0", since the cases above cover
subtraction as well as addition.

> +      if (trueop1 == CONST0_RTX (mode) && !HONOR_SIGNED_ZEROS (mode))

The HONOR_SIGNED_ZEROS check is redundant, since these ops don't support
modes with signed zero.

Same for the other HONOR_* macros in the patch.  E.g. I don't think
we should try to guess how infinities and saturation work together.

> +     return op0;
> +      return 0;
> +
>      case SS_MULT:
>      case US_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +       && !HONOR_NANS (mode)
> +       && !HONOR_SIGNED_ZEROS (mode)
> +       && !side_effects_p (op0))
> +     return op1;
> +
> +      /* Simplify x * 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +     return op0;
> +      return 0;
> +
> +    case SH_MULT:
> +    case UH_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +       && !HONOR_NANS (mode)
> +       && !HONOR_SIGNED_ZEROS (mode)
> +       && !side_effects_p (op0))
> +     return op1;
> +      return 0;
> +
>      case SS_DIV:
>      case US_DIV:
> -      /* ??? There are simplifications that can be done.  */
> +      /* Simplify x / 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +     return op0;
>        return 0;
>  
>      case VEC_SERIES:
> @@ -5011,6 +5040,63 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
>             }
>           break;
>         }
> +
> +     case SS_PLUS:
> +       result = wi::add (pop0, pop1, SIGNED, &overflow);

I think a goto label would be good here, so that later signed
ops can reuse this code instead of having to repeat it.
Same idea for the unsigned case.

> +          if (overflow == wi::OVF_OVERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +       else if (overflow == wi::OVF_UNDERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);

Should be min_value.  Same for the other underflow handlers.

Like Andreas said, @pxref would be better where applicable.

Thanks,
Richard

> +          else if (overflow != wi::OVF_NONE)
> +         return NULL_RTX;
> +       break;
> +
> +     case US_PLUS:
> +       result = wi::add (pop0, pop1, UNSIGNED, &overflow);
> +          if (overflow != wi::OVF_NONE)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +       break;
> +
> +     case SS_MINUS:
> +       result = wi::sub (pop0, pop1, SIGNED, &overflow);
> +          if (overflow == wi::OVF_OVERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +       else if (overflow == wi::OVF_UNDERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +          else if (overflow != wi::OVF_NONE)
> +         return NULL_RTX;
> +       break;
> +
> +     case US_MINUS:
> +       result = wi::sub (pop0, pop1, UNSIGNED, &overflow);
> +          if (overflow != wi::OVF_NONE)
> +         result = wi::min_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +       break;
> +
> +     case SS_MULT:
> +       result = wi::mul (pop0, pop1, SIGNED, &overflow);
> +          if (overflow == wi::OVF_OVERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +       else if (overflow == wi::OVF_UNDERFLOW)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +          else if (overflow != wi::OVF_NONE)
> +         return NULL_RTX;
> +       break;
> +
> +     case US_MULT:
> +       result = wi::mul (pop0, pop1, UNSIGNED, &overflow);
> +          if (overflow != wi::OVF_NONE)
> +         result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +       break;
> +
> +     case SH_MULT:
> +       result = wi::mul_high (pop0, pop1, SIGNED);
> +       break;
> +
> +     case UH_MULT:
> +       result = wi::mul_high (pop0, pop1, UNSIGNED);
> +       break;
> +
>       default:
>         return NULL_RTX;
>       }
