On 3/12/20 7:58 AM, LIU Zhiwei wrote:
> +/* Vector Integer Merge and Move Instructions */
> +static bool opivv_vmerge_check(DisasContext *s, arg_rmrr *a)
> +{
> +    return (vext_check_isa_ill(s, RVV) &&
> +            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
> +            vext_check_reg(s, a->rd, false) &&
> +            vext_check_reg(s, a->rs2, false) &&
> +            vext_check_reg(s, a->rs1, false) &&
> +            ((a->vm == 0) || (a->rs2 == 0)));
> +}
> +GEN_OPIVV_TRANS(vmerge_vvm, opivv_vmerge_check)
> +
> +static bool opivx_vmerge_check(DisasContext *s, arg_rmrr *a)
> +{
> +    return (vext_check_isa_ill(s, RVV) &&
> +            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
> +            vext_check_reg(s, a->rd, false) &&
> +            vext_check_reg(s, a->rs2, false) &&
> +            ((a->vm == 0) || (a->rs2 == 0)));
> +}
> +GEN_OPIVX_TRANS(vmerge_vxm, opivx_vmerge_check)
> +
> +GEN_OPIVI_TRANS(vmerge_vim, 0, vmerge_vxm, opivx_vmerge_check)

I think you need to special-case these.  The unmasked forms are the
canonical move instructions: vmv.v.*.

You definitely want to use tcg_gen_gvec_mov (vv), tcg_gen_gvec_dup_i{32,64}
(vx), and tcg_gen_gvec_dup{8,16,32,64}i (vi).
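
E.g. something like this, as a rough sketch only (assuming the vreg_ofs()
and MAXSZ() helpers used elsewhere in this series, ignoring the vl < vlmax
tail for brevity, and with expand_vmerge_masked() as a hypothetical stand-in
for the existing helper-based expansion):

static bool trans_vmerge_vvm(DisasContext *s, arg_rmrr *a)
{
    if (!opivv_vmerge_check(s, a)) {
        return false;
    }
    if (a->vm) {
        /* Unmasked: vs2 is constrained to v0 by the check above,
           so this is vmv.v.v, a plain whole-register copy.  */
        tcg_gen_gvec_mov(s->sew, vreg_ofs(s, a->rd),
                         vreg_ofs(s, a->rs1), MAXSZ(s), MAXSZ(s));
        return true;
    }
    /* Masked: fall back to the helper call, as before.  */
    return expand_vmerge_masked(s, a);  /* hypothetical name */
}

and similarly tcg_gen_gvec_dup_i{32,64} for the unmasked vx form and
tcg_gen_gvec_dup{8,16,32,64}i for vi.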

> +        if (!vm && !vext_elem_mask(v0, mlen, i)) {                   \
> +            ETYPE s2 = *((ETYPE *)vs2 + H(i));                       \
> +            *((ETYPE *)vd + H1(i)) = s2;                             \
> +        } else {                                                     \
> +            ETYPE s1 = *((ETYPE *)vs1 + H(i));                       \
> +            *((ETYPE *)vd + H(i)) = s1;                              \
> +        }                                                            \

Perhaps better as

ETYPE *vt = (!vm && !vext_elem_mask(v0, mlen, i) ? vs2 : vs1);
*((ETYPE *)vd + H(i)) = *((ETYPE *)vt + H(i));
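
(As a side effect, this also fixes the store above, which uses H1(i) where
it should be H(i); the vx version below has the same typo.)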

> +        if (!vm && !vext_elem_mask(v0, mlen, i)) {                   \
> +            ETYPE s2 = *((ETYPE *)vs2 + H(i));                       \
> +            *((ETYPE *)vd + H1(i)) = s2;                             \
> +        } else {                                                     \
> +            *((ETYPE *)vd + H(i)) = (ETYPE)(target_long)s1;          \
> +        }                                                            \

Perhaps better as

ETYPE s2 = *((ETYPE *)vs2 + H(i));
ETYPE d = (!vm && !vext_elem_mask(v0, mlen, i)
           ? s2 : (ETYPE)(target_long)s1);
*((ETYPE *)vd + H(i)) = d;

as most host platforms have a conditional reg-reg move, but not a conditional 
load.


r~
