Hi Richard,
(define_insn_and_split "*thumb2_smaxsi3"
- [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
- (smax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
- (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r ,r,l")
+ (smax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r,0,0")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI,r,Py")))
(clobber (reg:CC CC_REGNUM))]
min/max operations are commutative. Marking this pattern as such would
make the second alternative redundant. Furthermore, splitting the first
pattern into a register-only variant and an immediate variant would mean
you don't then need one of your additional alternatives later on. I
think you want constraints of the form
op0: "=r,l,r,r"
op1: "%0,0,0,r"
op2: "r,Py,I,r"
This has the added advantage that you can now get a more accurate length
calculation for the first two cases.
Similarly for the other min/max operations.
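Putting that together, a hedged sketch of how the smax pattern could look with the suggested constraints (not the committed pattern; the '%' commutativity marker on operand 1 makes the swapped-operand alternative unnecessary, and the body is elided):

```
;; Sketch only, based on the constraints suggested above.
;; '%0' marks operands 1 and 2 as commutative.
(define_insn_and_split "*thumb2_smaxsi3"
  [(set (match_operand:SI 0 "s_register_operand" "=r,l,r,r")
        (smax:SI (match_operand:SI 1 "s_register_operand" "%0,0,0,r")
                 (match_operand:SI 2 "arm_rhs_operand" "r,Py,I,r")))
   (clobber (reg:CC CC_REGNUM))]
  "TARGET_THUMB2"
  ...)
```

The first two alternatives can then carry a 16-bit length attribute, which is where the more accurate length calculation comes from.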
Done. I've also converted the splitters to generate cond_execs instead of
if_then_else to make it more explicit that we expect one conditional move from
this pattern.
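For illustration, the difference between the two split styles, sketched with match_dups (a hypothetical fragment, not the exact committed RTL):

```
;; if_then_else form: a single set whose source selects a value.
(set (match_dup 0)
     (if_then_else:SI (match_dup 1) (match_dup 2) (match_dup 3)))

;; cond_exec form: the set itself is predicated, making it explicit
;; that exactly one conditional move is expected from the pattern.
(cond_exec (match_dup 1)
           (set (match_dup 0) (match_dup 2)))
```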
(define_insn_and_split "*thumb2_abssi2"
- [(set (match_operand:SI 0 "s_register_operand" "=r,&r")
+ [(set (match_operand:SI 0 "s_register_operand" "=Ts,&r")
(abs:SI (match_operand:SI 1 "s_register_operand" "0,r")))
I think this pattern should be reworked to put the second alternative
first. In thumb state that will be more efficient (two instructions
rather than three). There's also now an argument for splitting out the
'l' and 'r' alternatives of the current 'it' variant and giving more
accurate length costs for the two.
Likewise for thumb2_neg_abssi2.
Done.
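For reference, the instruction counts Richard mentions, sketched with illustrative registers (the conditional variant needs an extra IT instruction in Thumb state):

```
@ 'it' alternative: three instructions in Thumb state.
        cmp     r0, #0
        it      lt
        rsblt   r0, r0, #0

@ Branchless alternative: two instructions, no IT needed.
@ With mask = r1 >> 31 (arithmetic), abs = (r1 ^ mask) - mask.
        eor     r0, r1, r1, asr #31
        sub     r0, r0, r1, asr #31
```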
(define_insn_and_split "*thumb2_mov_scc"
- [(set (match_operand:SI 0 "s_register_operand" "=r")
+ [(set (match_operand:SI 0 "s_register_operand" "=r, =l")
'=' applies to all the alternatives. Only put it at the start of the
constraint string.
Also applies to thumb2_mov_negscc.
Done.
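Concretely, a single '=' at the start of the constraint string covers every alternative:

```
;; Correct: one leading '=' applies to both alternatives.
(match_operand:SI 0 "s_register_operand" "=r,l")
```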
(define_insn_and_split "*thumb2_mov_notscc"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(not:SI (match_operator:SI 1 "arm_comparison_operator"
[(match_operand 2 "cc_register" "") (const_int 0)])))]
- "TARGET_THUMB2"
+ "TARGET_THUMB2 && !arm_restrict_it"
For restricted IT and low regs, this could be reworked as
mvn rd, #0
it <cond>
lsl<cond> rd, rd, #1
Reworked splitter.
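A cond_exec-based split along those lines might look like this (hypothetical sketch; operand numbering assumed):

```
;; mvn rd, #0 leaves 0xffffffff (~0, the 'condition false' result);
;; the predicated left shift turns it into 0xfffffffe (~1) when the
;; condition holds.
(set (match_dup 0) (const_int -1))
(cond_exec (match_dup 1)
           (set (match_dup 0)
                (ashift:SI (match_dup 0) (const_int 1))))
```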
+(define_insn "*thumb2_ior_scc_strict_it"
+ [(set (match_operand:SI 0 "s_register_operand" "=l,=l")
Repeated '='
Fixed.
Is this better?
Tested on a model configured --with-arch=armv8-a, with and without -mthumb.
Bootstrapped on a Cortex-A15.
Thanks,
Kyrill
2013-07-11 Kyrylo Tkachov <kyrylo.tkac...@arm.com>
* config/arm/predicates.md (shiftable_operator_strict_it): New
predicate.
* config/arm/thumb2.md (thumb_andsi_not_shiftsi_si): Disable cond_exec
version for arm_restrict_it.
(thumb2_smaxsi3): Convert to generate cond_exec.
(thumb2_sminsi3): Likewise.
(thumb32_umaxsi3): Likewise.
(thumb2_uminsi3): Likewise.
(thumb2_abssi2): Adjust constraints for arm_restrict_it.
(thumb2_neg_abssi2): Likewise.
(thumb2_mov_scc): Add alternative for 16-bit encoding.
(thumb2_movsicc_insn): Adjust alternatives.
(thumb2_mov_negscc): Disable for arm_restrict_it.
(thumb2_mov_negscc_strict_it): New pattern.
(thumb2_mov_notscc_strict_it): New pattern.
(thumb2_mov_notscc): Disable for arm_restrict_it.
(thumb2_ior_scc): Likewise.
(thumb2_ior_scc_strict_it): New pattern.
(thumb2_cond_move): Adjust for arm_restrict_it.
(thumb2_cond_arith): Disable for arm_restrict_it.
(thumb2_cond_arith_strict_it): New pattern.
(thumb2_cond_sub): Adjust for arm_restrict_it.
(thumb2_movcond): Likewise.
(thumb2_extendqisi_v6): Disable cond_exec variant for arm_restrict_it.
(thumb2_zero_extendhisi2_v6): Likewise.
(thumb2_zero_extendqisi2_v6): Likewise.
(orsi_notsi_si): Likewise.
(orsi_not_shiftsi_si): Likewise.