On 06/20/2013 03:53 AM, Jani Kokkonen wrote:
>  #ifndef _EXEC_ALL_H_
>  #define _EXEC_ALL_H_
> -
>  #include "qemu-common.h"
> -

Whitespace change?

> +/* Load and compare a TLB entry, emitting the conditional jump to the
> +slow path for the failure case, which will be patched later when finalizing
> +the slow path. Clobbers X0,X1,X2,X3 and TMP.  */

Indentation.

> +    tcg_out_ldst(s, TARGET_LONG_BITS == 64 ? LDST_64 : LDST_32,
> +                  LDST_LD, TCG_REG_X0, TCG_REG_X2, tlb_offset & 0xfff);
> +    tcg_out_ldst(s, LDST_64, LDST_LD, TCG_REG_X1, TCG_REG_X2,
> +        (tlb_offset & 0xfff) + (offsetof(CPUTLBEntry, addend) -
> +             (is_read ? offsetof(CPUTLBEntry, addr_read) :
> +                   offsetof(CPUTLBEntry, addr_write))));

I wonder if it wouldn't be clearer to not include the addr_read/write offset in
the passed tlb_offset value.  So more like

  int tlb_offset = offsetof(CPUArchState, tlb_table[mem_index]);

  tcg_out_ldst(s, TARGET_LONG_BITS == 64 ? LDST_64 : LDST_32,
               LDST_LD, TCG_REG_X0, TCG_REG_X2,
               (tlb_offset & 0xfff) +
               (is_read ? offsetof(CPUTLBEntry, addr_read)
                : offsetof(CPUTLBEntry, addr_write)));
  tcg_out_ldst(s, LDST_64, LDST_LD, TCG_REG_X1, TCG_REG_X2,
               (tlb_offset & 0xfff) + offsetof(CPUTLBEntry, addend));

and then in the two callers pass down mem_index instead of tlb_offset.
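
For illustration only, the caller side of that change might look roughly
like this (tcg_out_tlb_read and the other argument names here are
placeholders, not necessarily what the patch uses):

  /* In tcg_out_qemu_ld/st: pass the mmu index; the helper derives the
     tlb_table offset from it, as sketched above.  */
  tcg_out_tlb_read(s, addr_reg, s_bits, &label_ptr, mem_index, 1);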

In addition, the function could use some commentary.
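
Something along these lines would do (wording only a suggestion, based on
what the code appears to be doing):

  /* Extract the TLB index from the guest address, load the comparator
     (addr_read or addr_write) and the addend from tlb_table[mem_index],
     compare against the page-aligned address, and branch to the slow
     path on a miss; the branch is patched when the slow path is
     emitted.  Clobbers X0, X1, X2, X3 and TMP.  */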


r~
