On 07/18/2011 08:41 AM, Georg-Johann Lay wrote:
> +(define_insn_and_split "*muluqihi3.uconst"
> +  [(set (match_operand:HI 0 "register_operand"                         "=r")
> +        (mult:HI (zero_extend:HI (match_operand:QI 1 "register_operand" "r"))
> +                 (match_operand:HI 2 "u8_operand"                       "M")))]
> +  "AVR_HAVE_MUL
> +   && avr_gate_split1()"
> +  { gcc_unreachable(); }
> +  "&& 1"
> +  [(set (match_dup 3)
> +        (match_dup 2))
> +   ; umulqihi3
> +   (set (match_dup 0)
> +        (mult:HI (zero_extend:HI (match_dup 1))
> +                 (zero_extend:HI (match_dup 3))))]
> +  {
> +    operands[2] = gen_int_mode (INTVAL (operands[2]), QImode);
> +    operands[3] = gen_reg_rtx (QImode);
> +  })
> +

I'm not keen on this at all.  I'd much prefer a formulation like

(define_insn_and_split "*muliqihi3_uconst"
  [(set (match_operand:HI 0 "register_operand" "=r")
        (mult:HI (zero_extend:HI
                  (match_operand:QI 1 "register_operand" "r"))
                 (match_operand:HI 2 "u8_operand" "n")))
   (clobber (match_scratch:QI 3 "=&r"))]
  "AVR_HAVE_MUL"
  "#"
  "&& reload_completed"
  [...]
)
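(For what it's worth, a plausible completion of the elided split arm —
my sketch, not part of the review — would mirror the split in the quoted
patch, except that operand 3 is now the clobbered scratch rather than a
fresh pseudo, so the gen_reg_rtx call goes away:

  [(set (match_dup 3)
        (match_dup 2))
   ; umulqihi3
   (set (match_dup 0)
        (mult:HI (zero_extend:HI (match_dup 1))
                 (zero_extend:HI (match_dup 3))))]
  {
    /* Operand 2 is matched in HImode; narrow it for the QImode set.  */
    operands[2] = gen_int_mode (INTVAL (operands[2]), QImode);
  }

Since the split condition is reload_completed, the scratch is already a
hard register by the time the split runs.)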

I see the obvious problem with this, of course: pass_split_after_reload
runs after pass_postreload_cse, so postreload CSE never sees the insns
produced by the split.

Does anything break if we simply move pass_split_after_reload
earlier?  Right to the beginning of pass_postreload for instance.
Seems to me that every port would gain by optimizing the stuff
that comes out of the post-reload splitters.
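Concretely, the suggestion would amount to something like this in the
sub-pass list of pass_postreload in init_optimization_passes (a sketch
only — the exact neighbors in the list may differ):

  /* Sketch: move the splitter to the head of the postreload group,
     ahead of postreload CSE, instead of its current later position.  */
  NEXT_PASS (pass_split_after_reload);  /* split first ...  */
  NEXT_PASS (pass_postreload_cse);      /* ... so CSE sees the split output  */
  NEXT_PASS (pass_gcse2);
  ...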


r~
