On 27.02.19 16:39, Richard Henderson wrote:
> On 2/26/19 3:38 AM, David Hildenbrand wrote:
>> +static DisasJumpType op_vl(DisasContext *s, DisasOps *o)
>> +{
>> +    load_vec_element(s, TMP_VREG_0, 0, o->addr1, MO_64);
>> +    gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
>> +    load_vec_element(s, TMP_VREG_0, 1, o->addr1, MO_64);
>> +    gen_gvec_mov(get_field(s->fields, v1), TMP_VREG_0);
>> +    return DISAS_NEXT;
>> +}
> 
> Isn't it just as easy to load two TCGv_i64 temps and store into the correct
> vector afterward?

Yes it is; using the existing helpers was just easier. I guess I'll
change that (rough sketch further down).

> 
> Also, it is easy to honor the required alignment:

I think that would be wrong. It is only an alignment hint.

"Setting the alignment hint to a non-zero value
that doesn’t correspond to the alignment of the second operand may
reduce performance on some models."

So we must not inject an exception for unaligned accesses. That,
however, would be the result of MO_ALIGN, right?

In essence, this is just an optimization for real hardware, and we can
ignore it completely.
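
Roughly, I have something like the following untested sketch in mind
(reusing write_vec_element_i64() from your snippet, and simply dropping
the alignment hint altogether):

static DisasJumpType op_vl(DisasContext *s, DisasOps *o)
{
    TCGv_i64 t0 = tcg_temp_new_i64();
    TCGv_i64 t1 = tcg_temp_new_i64();

    /* The m3 alignment hint is ignored: two plain 8-byte loads, no MO_ALIGN. */
    tcg_gen_qemu_ld_i64(t0, o->addr1, get_mem_index(s), MO_TEQ);
    gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
    tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEQ);

    /* Only touch v1 once both loads have succeeded. */
    write_vec_element_i64(t0, get_field(s->fields, v1), 0, MO_64);
    write_vec_element_i64(t1, get_field(s->fields, v1), 1, MO_64);

    tcg_temp_free_i64(t0);
    tcg_temp_free_i64(t1);
    return DISAS_NEXT;
}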

> 
>     TCGMemOp mop1, mop2;
> 
>     if (m3 < 3) {
>         mop1 = mop2 = MO_TEQ;
>     } else if (m3 == 3) {
>         mop1 = mop2 = MO_TEQ | MO_ALIGN;
>     } else {
>         mop1 = MO_TEQ | MO_ALIGN_16;
>         mop2 = MO_TEQ | MO_ALIGN;
>     }
>     tcg_gen_qemu_ld_i64(tmp1, o->addr1, mem_idx, mop1);
>     gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
>     tcg_gen_qemu_ld_i64(tmp2, o->addr1, mem_idx, mop2);
>     write_vec_element_i64(tmp1, v1, 0, MO_64);
>     write_vec_element_i64(tmp2, v1, 1, MO_64);
> 
> 
> r~
> 


-- 

Thanks,

David / dhildenb
