On 2/25/25 10:17, Philippe Mathieu-Daudé wrote:
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index a0f050ff9c..08106b6e4c 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc


+static const TCGOutOpBinary outop_add = {
+    .base.static_constraint = C_O1_I2(r, r, rJ),

So now 32-bit uses the 'J' constraint (TCG_CT_CONST_S32) and we get the
signed32 cast, OK.

Yes, TCG_CT_CONST_S32 will match all constants for TCG_TYPE_I32, so there is no change in functionality.

+static const TCGOutOpBinary outop_add = {
+    .base.static_constraint = C_O1_I2(r, r, rT),

Similarly, 32-bit uses 'T' constraint (TCG_CT_CONST_S32) and we get the
signed32 cast, OK.

Yes.

@@ -2989,8 +3000,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
                  tcg_out32(s, SUBFIC | TAI(a0, a2, a1));
              }
          } else if (const_args[2]) {
-            a2 = -a2;
-            goto do_addi_32;
+            tgen_addi(s, type, a0, a1, (int32_t)-a2);

So do we really need the (int32_t) cast here?

Interesting question.

(0) Truncate -INT32_MIN to remain a 32-bit quantity.

(1) We are *supposed* to have eliminated this case already, converting
    all subtract of constant to add of negative constant.  That said,
    patch 9 fixes one more case of this.  (There were rebase conflicts
    moving patch 9 earlier, so I took the lazy way and left it where it is.)

(2) The cast goes away when we convert INDEX_op_sub_* and forcefully
    disallow subtract of constant.

Because of (1), we *probably* don't need the cast now, but until (2) it's difficult to prove. On balance I think it's better to have it until we are sure we don't need it.

+static const TCGOutOpBinary outop_add = {
+    .base.static_constraint = C_O1_I2(r, r, rJ),

Don't we need

        .base.static_constraint = C_O1_I2(r, rz, rJ),

since commit 1bbcae5adaa ("tcg/sparc64: Use 'z' constraint")?

Constants should never appear in the first source operand.
The optimize pass will always swap commutative operands.

This patch asserts that this has been done:

+    case INDEX_op_add_i32:
+    case INDEX_op_add_i64:
+        {
+            const TCGOutOpBinary *out =
+                container_of(all_outop[op->opc], TCGOutOpBinary, base);
+
+            tcg_debug_assert(!const_args[1]);

Therefore, simplify the constraints so that they're not attempting to handle 
nonsense.


r~
