The objective here is to avoid serializing the first-fault and no-fault
AArch64 SVE loads, by letting them take the mmap lock for reading rather
than exclusively.
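To give a feel for the shape of the split, here is a minimal sketch.
The mmap_rdlock/mmap_wrlock names and the pthread_rwlock_t backing come
from the two patch titles below; the lock variable, the single unlock
entry point, and the comments are my own illustration, not necessarily
what the patches implement:

    #include <pthread.h>

    static pthread_rwlock_t mmap_rwlock = PTHREAD_RWLOCK_INITIALIZER;

    /* Readers, e.g. SVE first-fault/no-fault loads probing guest
       pages, may hold the lock concurrently with each other.  */
    void mmap_rdlock(void)
    {
        pthread_rwlock_rdlock(&mmap_rwlock);
    }

    /* Writers, e.g. target mmap/munmap and (user-only) code
       generation, still get exclusive access.  */
    void mmap_wrlock(void)
    {
        pthread_rwlock_wrlock(&mmap_rwlock);
    }

    /* pthread_rwlock_unlock releases either kind of hold, so a
       single unlock suffices for both paths.  */
    void mmap_unlock(void)
    {
        pthread_rwlock_unlock(&mmap_rwlock);
    }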
When I first thought of this, we still had tb_lock protecting code
generation and merely needed mmap_lock to prevent mapping changes while
doing so.  Now (or very soon), tb_lock is gone and user-only code
generation dual-purposes the mmap_lock as the code generation lock.

There are two other legacy users of mmap_lock within linux-user, for
ppc and mips.  I believe these are dead code from before those ports
were converted to use tcg atomics.

Thoughts?

r~

Based-on: <20180621143715.27176-1-richard.hender...@linaro.org> [tcg-next]
Based-on: <20180621015359.12018-1-richard.hender...@linaro.org> [v5 SVE]

Richard Henderson (2):
  exec: Split mmap_lock to mmap_rdlock/mmap_wrlock
  linux-user: Use pthread_rwlock_t for mmap_rd/wrlock

 include/exec/exec-all.h    |  6 ++--
 accel/tcg/cpu-exec.c       |  8 ++---
 accel/tcg/translate-all.c  |  4 +--
 bsd-user/mmap.c            | 18 +++++++++---
 exec.c                     |  6 ++--
 linux-user/elfload.c       |  2 +-
 linux-user/mips/cpu_loop.c |  2 +-
 linux-user/mmap.c          | 60 +++++++++++++++++++++++++-------------
 linux-user/ppc/cpu_loop.c  |  2 +-
 linux-user/syscall.c       |  4 +--
 target/arm/sve_helper.c    | 10 ++-----
 target/xtensa/op_helper.c  |  2 +-
 12 files changed, 76 insertions(+), 48 deletions(-)

-- 
2.17.1