On 12/5/23 06:59, Roger Sayle wrote:
This patch improves the code generated for bitfield sign extensions on
ARC CPUs without a barrel shifter.

Compiling the following test case:

int foo(int x) { return (x<<27)>>27; }

with -O2 -mcpu=em generates two loops:
foo:    mov     lp_count,27
        lp      2f
        add     r0,r0,r0
        nop
2:      # end single insn loop
        mov     lp_count,27
        lp      2f
        asr     r0,r0
        nop
2:      # end single insn loop
        j_s     [blink]
and the closely related test case:

struct S { int a : 5; };
int bar (struct S *p) { return p->a; }

generates the slightly better:
bar:    ldb_s   r0,[r0]
        mov_s   r2,0    ;3
        add3    r0,r2,r0
        sexb_s  r0,r0
        asr_s   r0,r0
        asr_s   r0,r0
        j_s.d   [blink]
        asr_s   r0,r0
which uses 6 instructions to perform this particular sign extension.
It turns out that sign extensions can always be implemented using at
most three instructions on ARC (without a barrel shifter) using the
idiom ((x & mask) ^ msb) - msb, where for an N-bit field mask is
(1 << N) - 1 and msb is 1 << (N-1) [as described in section "2-5 Sign
Extension" of Henry Warren's book "Hacker's Delight"].  Using this,
the sign extensions above on ARC EM both become:
        bmsk_s  r0,r0,4
        xor     r0,r0,32
        sub     r0,r0,32
which takes about 3 cycles, compared to the ~112 cycles for the loops
in foo.
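
For reference, here is the same idiom written out in C (a minimal
sketch, not part of the patch; the function name and the generic-width
parameter are purely illustrative, with the constants for the 5-bit
foo case noted in the comments):

/* Sign-extend the low N bits of X using ((x & mask) ^ msb) - msb,
   where mask = (1 << N) - 1 and msb = 1 << (N - 1).  For N == 5
   (the foo case above) mask is 0x1f and msb is 0x10.  */
static inline int sext_n (int x, int n)
{
  int mask = (1 << n) - 1;   /* keep only the low n bits */
  int msb  = 1 << (n - 1);   /* sign bit of the n-bit field */
  return ((x & mask) ^ msb) - msb;
}

The three operations map onto the bmsk_s/xor/sub sequence shown above;
the C version is only meant to make the arithmetic easy to check.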
Tested with a cross-compiler to arc-linux hosted on x86_64,
with no new (compile-only) regressions from make -k check.
Ok for mainline if this passes Claudiu's nightly testing?
2023-12-05  Roger Sayle  <ro...@nextmovesoftware.com>

gcc/ChangeLog
	* config/arc/arc.md (*extvsi_n_0): New define_insn_and_split to
	implement SImode sign extract using an AND, XOR and MINUS sequence.

gcc/testsuite/ChangeLog
	* gcc.target/arc/extvsi-1.c: New test case.
	* gcc.target/arc/extvsi-2.c: Likewise.
Thanks in advance,
Roger
--
patchar.txt
diff --git a/gcc/config/arc/arc.md b/gcc/config/arc/arc.md
index bf9f88eff047..5ebaf2e20ab0 100644
--- a/gcc/config/arc/arc.md
+++ b/gcc/config/arc/arc.md
@@ -6127,6 +6127,26 @@ archs4x, archs4xd"
""
[(set_attr "length" "8")])
+(define_insn_and_split "*extvsi_n_0"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+	(sign_extract:SI (match_operand:SI 1 "register_operand" "0")
+			 (match_operand:QI 2 "const_int_operand")
+			 (const_int 0)))]
+  "!TARGET_BARREL_SHIFTER
+   && IN_RANGE (INTVAL (operands[2]), 2,
+		(optimize_insn_for_size_p () ? 28 : 30))"
+  "#"
+  "&& 1"
+  [(set (match_dup 0) (and:SI (match_dup 0) (match_dup 3)))
+   (set (match_dup 0) (xor:SI (match_dup 0) (match_dup 4)))
+   (set (match_dup 0) (minus:SI (match_dup 0) (match_dup 4)))]
+{
+  int tmp = INTVAL (operands[2]);
+  operands[3] = GEN_INT (~(HOST_WIDE_INT_M1U << tmp));
+  operands[4] = GEN_INT (HOST_WIDE_INT_1U << tmp);
Shouldn't operands[4] be GEN_INT (HOST_WIDE_INT_1U << (tmp - 1))?
Otherwise it's flipping the wrong bit AFAICT.
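
To make the concern concrete, here is a quick worked check for the
5-bit case (tmp == 5); this is a standalone snippet I put together,
not anything from the patch:

#include <assert.h>

int main (void)
{
  int x = 0x1f;   /* a 5-bit field whose value should read back as -1 */

  /* With msb = 1 << tmp = 32 (the patch as posted), the field is not
     sign-extended: (31 ^ 32) - 32 == 31.  */
  assert ((((x & 0x1f) ^ 32) - 32) == 31);

  /* With msb = 1 << (tmp - 1) = 16, the idiom recovers -1:
     (31 ^ 16) - 16 == -1.  */
  assert ((((x & 0x1f) ^ 16) - 16) == -1);

  return 0;
}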
The H8 can benefit from the same transformation, which is how I found
this little goof.  It's not as big a gain as on ARC, but it does affect
one of those builtin-overflow tests which tend to dominate testing time
on the H8.
jeff