On 9/6/19 3:57 AM, David Hildenbrand wrote:
> +static S390Access access_prepare(CPUS390XState *env, vaddr vaddr, int size,
> +                                 MMUAccessType access_type, uintptr_t ra)
> +{
> +    int mmu_idx = cpu_mmu_index(env, false);
> +
> +    return access_prepare_idx(env, vaddr, size, access_type, mmu_idx, ra);
> +}
> +
>  static void access_memset_idx(CPUS390XState *env, vaddr vaddr, uint8_t byte,
>                                int size, int mmu_idx, uintptr_t ra)
>  {
> @@ -420,9 +428,13 @@ static uint32_t do_helper_mvc(CPUS390XState *env, uint32_t l, uint64_t dest,
>      } else if (!is_destructive_overlap(env, dest, src, l)) {
>          access_memmove(env, dest, src, l, ra);
>      } else {
> +        S390Access srca = access_prepare(env, src, l, MMU_DATA_LOAD, ra);
> +        S390Access desta = access_prepare(env, dest, l, MMU_DATA_STORE, ra);
I was just thinking it might be better to drop the non-idx functions:
access_prepare, access_memset, access_memmove. What this leads to is
computing cpu_mmu_index multiple times, as here. It could just as easily
be hoisted to the top of do_helper_mvc and used in all of the sub-cases;
see the sketch after my sig.

That said, the code here is correct, so

Reviewed-by: Richard Henderson <richard.hender...@linaro.org>

r~
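For concreteness, a rough sketch of what the hoisting could look like.
This assumes _idx variants exist for all three helpers (access_memmove_idx
in particular is assumed here, mirroring access_memset_idx and
access_prepare_idx from the patch), and the rest of the function body is
abbreviated:

static uint32_t do_helper_mvc(CPUS390XState *env, uint32_t l, uint64_t dest,
                              uint64_t src, uintptr_t ra)
{
    /* Compute the MMU index once; it is constant for this helper. */
    const int mmu_idx = cpu_mmu_index(env, false);

    if (dest == src + 1) {
        /* Source byte gets propagated over the destination, i.e. memset. */
        access_memset_idx(env, dest, cpu_ldub_data_ra(env, src, ra), l,
                          mmu_idx, ra);
    } else if (!is_destructive_overlap(env, dest, src, l)) {
        access_memmove_idx(env, dest, src, l, mmu_idx, ra);
    } else {
        S390Access srca = access_prepare_idx(env, src, l, MMU_DATA_LOAD,
                                             mmu_idx, ra);
        S390Access desta = access_prepare_idx(env, dest, l, MMU_DATA_STORE,
                                              mmu_idx, ra);
        /* ... byte-wise copy using srca/desta ... */
    }
    return env->cc_op;
}

This way cpu_mmu_index is evaluated once per MVC, rather than once per
helper invocation in each sub-case.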