Hi Richard,
On 6/7/24 21:16, Richard Henderson wrote:
On 6/7/24 07:49, Chinmay Rath wrote:
Moving the following instructions to decodetree specification:
lxv{b16, d2, h8, w4, ds, ws}x : X-form
stxv{b16, d2, h8, w4}x : X-form
The changes were verified by confirming that the TCG ops generated
for those instructions remain the same; these were captured using
the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <ra...@linux.ibm.com>
---
target/ppc/insn32.decode | 10 ++
target/ppc/translate/vsx-impl.c.inc | 199 ++++++++++++----------------
target/ppc/translate/vsx-ops.c.inc | 12 --
3 files changed, 97 insertions(+), 124 deletions(-)
Because the ops are identical,
Reviewed-by: Richard Henderson <richard.hender...@linaro.org>
But you really should update these to use tcg_gen_qemu_ld/st_i128 with
the proper atomicity flags. This will fix an existing bug...
Sure Richard, I have noted this suggestion of yours from an earlier
patch, and plan to make this change, along with a few of your other
suggestions which I couldn't implement earlier, plus some clean-ups,
this week. I refrained from doing it as part of the decodetree
movement so I could take proper time to understand and test it.
I should send out those patches soon.
Thanks & Regards,
Chinmay
+static bool trans_LXVD2X(DisasContext *ctx, arg_LXVD2X *a)
{
TCGv EA;
TCGv_i64 t0;
+
+ REQUIRE_VSX(ctx);
+ REQUIRE_INSNS_FLAGS2(ctx, VSX);
+
t0 = tcg_temp_new_i64();
gen_set_access_type(ctx, ACCESS_INT);
+ EA = do_ea_calc(ctx, a->ra, cpu_gpr[a->rb]);
gen_qemu_ld64_i64(ctx, t0, EA);
+ set_cpu_vsr(a->rt, t0, true);
where the vector register is partially modified ...
tcg_gen_addi_tl(EA, EA, 8);
gen_qemu_ld64_i64(ctx, t0, EA);
before a fault from the second load is recognized.
Similarly for stores leaving memory partially modified.
r~
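
For reference, one possible shape for the i128 conversion suggested
above, as applied to trans_LXVD2X (a sketch only, not a tested patch:
it assumes QEMU's tcg_gen_qemu_ld_i128 and MemOp atomicity flags; the
set_vsr_full() helper for writing both doublewords of a VSR is
illustrative and would need to be provided):

```c
/*
 * Sketch: perform the full 16-byte access as a single i128 load so a
 * fault is recognized before any part of the target VSR is modified,
 * avoiding the partial-update bug described above.
 * MO_ATOM_IFALIGN_PAIR requests the atomicity the two 8-byte halves
 * need when the access is aligned.
 */
static bool trans_LXVD2X(DisasContext *ctx, arg_LXVD2X *a)
{
    TCGv EA;
    TCGv_i128 data;

    REQUIRE_VSX(ctx);
    REQUIRE_INSNS_FLAGS2(ctx, VSX);

    data = tcg_temp_new_i128();
    gen_set_access_type(ctx, ACCESS_INT);
    EA = do_ea_calc(ctx, a->ra, cpu_gpr[a->rb]);
    tcg_gen_qemu_ld_i128(data, EA, ctx->mem_idx,
                         DEF_MEMOP(MO_128 | MO_ATOM_IFALIGN_PAIR));
    /* set_vsr_full() is an assumed helper: write all 128 bits of VSR rt */
    set_vsr_full(a->rt, data);
    return true;
}
```

The store side would mirror this with get_vsr_full() and
tcg_gen_qemu_st_i128, so a faulting store likewise leaves memory
unmodified.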