Rename to fold_multiply2, and handle muls2_i32, mulu2_i64,
and muls2_i64.
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 44 +++-
1 file changed, 35 insertions(+), 9 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 735eec6462..ae4643
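For illustration, when both multiplicands are already known constants, all
four opcodes reduce to computing the double-width product and splitting it
into low and high halves. A minimal standalone sketch of that arithmetic,
assuming a compiler with 128-bit integer support for the i64 cases (the
names and types below are stand-ins, not the actual tcg/optimize.c
interfaces):

#include <stdint.h>
#include <stdio.h>

typedef enum {
    OP_MULU2_I32,
    OP_MULS2_I32,
    OP_MULU2_I64,
    OP_MULS2_I64,
} MulOp;

/* Compute the low/high halves of the double-width product of constants. */
static void fold_multiply2_const(MulOp op, uint64_t a, uint64_t b,
                                 uint64_t *lo, uint64_t *hi)
{
    switch (op) {
    case OP_MULU2_I32: {
        uint64_t p = (uint64_t)(uint32_t)a * (uint32_t)b;
        *lo = (uint32_t)p;
        *hi = (uint32_t)(p >> 32);
        break;
    }
    case OP_MULS2_I32: {
        int64_t p = (int64_t)(int32_t)a * (int32_t)b;
        *lo = (uint32_t)p;
        *hi = (uint32_t)((uint64_t)p >> 32);
        break;
    }
    case OP_MULU2_I64: {
        /* Assumes the GCC/Clang 128-bit integer extension. */
        unsigned __int128 p = (unsigned __int128)a * b;
        *lo = (uint64_t)p;
        *hi = (uint64_t)(p >> 64);
        break;
    }
    case OP_MULS2_I64: {
        __int128 p = (__int128)(int64_t)a * (int64_t)b;
        *lo = (uint64_t)p;
        *hi = (uint64_t)((unsigned __int128)p >> 64);
        break;
    }
    }
}

int main(void)
{
    uint64_t lo, hi;

    /* 0xffffffffffffffff * 2 = 0x1_fffffffffffffffe */
    fold_multiply2_const(OP_MULU2_I64, UINT64_MAX, 2, &lo, &hi);
    printf("lo=%llx hi=%llx\n", (unsigned long long)lo,
           (unsigned long long)hi);
    return 0;
}

In the real optimizer, such a fold would then replace the multiply with two
constant moves, one for each output.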
Pull the "op r, a, a => mov r, a" optimization into a function,
and use it in the outer opcode fold functions.
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 39 ---
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
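The shape of that refactoring can be sketched standalone (the Op struct and
argument representation below are simplified stand-ins for the real
optimizer state, not the actual tcg/optimize.c interfaces): the
identical-operand check lives in one helper, and each per-opcode fold
function calls it before doing anything else.

#include <stdbool.h>
#include <stdio.h>

typedef enum { OP_AND, OP_OR, OP_MOV } Opcode;

typedef struct {
    Opcode opc;
    int args[3];   /* args[0] = output, args[1..2] = inputs */
} Op;

/* If both inputs are the same value, degrade the op to a copy. */
static bool fold_xx_to_x(Op *op)
{
    if (op->args[1] == op->args[2]) {
        op->opc = OP_MOV;
        return true;    /* op fully handled */
    }
    return false;       /* fall through to further folding */
}

static bool fold_and(Op *op)
{
    /* "and r, a, a" is just "mov r, a". */
    return fold_xx_to_x(op);
}

int main(void)
{
    Op op = { OP_AND, { 0, 5, 5 } };

    fold_and(&op);
    printf("opcode after fold: %s\n", op.opc == OP_MOV ? "mov" : "and");
    return 0;
}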
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 53 +-
1 file changed, 31 insertions(+), 22 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 1366bbaa17..1361bffab9 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -917,6 +917
Pull the "op r, 0, b => movi r, 0" optimization into a function,
and use it in fold_shift.
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 28 ++--
1 file changed, 10 insertions(+), 18 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 3b0be1c4e1..6936
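The same pattern, again as an illustrative standalone sketch rather than the
actual implementation (the helper name, the Op struct, and the trivial
constant tracking below are all stand-ins): when the value being shifted is
a known zero, the shift folds to a constant zero regardless of the count.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { OP_SHL, OP_MOVI } Opcode;

typedef struct {
    Opcode opc;
    int args[3];          /* args[0]=out, args[1]=value, args[2]=count */
    uint64_t const_val;   /* payload when opc == OP_MOVI */
} Op;

/* Stand-in for the optimizer's constant-propagation state. */
static bool arg_is_const_zero(int arg)
{
    return arg == 0;      /* pretend temp 0 holds the constant 0 */
}

/* If the i'th input is a known zero, the whole op folds to movi 0. */
static bool fold_ix_to_i(Op *op, int i)
{
    if (arg_is_const_zero(op->args[i])) {
        op->opc = OP_MOVI;
        op->const_val = 0;
        return true;
    }
    return false;
}

static bool fold_shift(Op *op)
{
    /* 0 shifted by any count is still 0. */
    return fold_ix_to_i(op, 1);
}

int main(void)
{
    Op op = { OP_SHL, { 3, 0, 2 }, 0 };

    fold_shift(&op);
    printf("%s\n", op.opc == OP_MOVI ? "movi r, 0" : "shl");
    return 0;
}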
This "garbage" setting pre-dates the addition of the type
changing opcodes INDEX_op_ext_i32_i64, INDEX_op_extu_i32_i64,
and INDEX_op_extr{l,h}_i64_i32.
So now we have definitive points at which to adjust z_mask
to eliminate such bits from the 32-bit operands.
Signed-off-by: Richard Henderson
---
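To make the z_mask point concrete: z_mask has a 1 for every bit that may be
nonzero, so each type-changing opcode gives an exact rule for the result
mask. A standalone sketch of those adjustments (illustrative only, not the
QEMU code):

#include <stdint.h>
#include <stdio.h>

/* extu_i32_i64: the high 32 bits become zero, so they drop out of z_mask. */
static uint64_t z_mask_extu_i32_i64(uint64_t z_mask)
{
    return (uint32_t)z_mask;
}

/* ext_i32_i64: if bit 31 may be set, the sign can copy into the high half. */
static uint64_t z_mask_ext_i32_i64(uint64_t z_mask)
{
    z_mask = (uint32_t)z_mask;
    if (z_mask & 0x80000000u) {
        z_mask |= 0xffffffff00000000ull;
    }
    return z_mask;
}

/* extrl_i64_i32 / extrh_i64_i32: keep only the selected 32-bit half. */
static uint64_t z_mask_extrl_i64_i32(uint64_t z_mask)
{
    return (uint32_t)z_mask;
}

static uint64_t z_mask_extrh_i64_i32(uint64_t z_mask)
{
    return z_mask >> 32;
}

int main(void)
{
    /* A value known to fit in 8 bits stays narrow across zero-extension. */
    printf("%#llx\n", (unsigned long long)z_mask_extu_i32_i64(0xff));
    printf("%#llx\n", (unsigned long long)z_mask_ext_i32_i64(0xffffffffu));
    printf("%#llx\n", (unsigned long long)z_mask_extrl_i64_i32(0xff00000000ull));
    printf("%#llx\n", (unsigned long long)z_mask_extrh_i64_i32(0xff00000000ull));
    return 0;
}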
Even though there is only one user, place this more complex
conversion into its own helper.
Signed-off-by: Richard Henderson
---
tcg/optimize.c | 84 --
1 file changed, 47 insertions(+), 37 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c