Hi Richard,

On 1/8/23 20:42, Richard Henderson wrote:
Replace MMULookupPageData* with CPUTLBEntryFull, addr, size.
Move QEMU_IOTHREAD_LOCK_GUARD to the caller.

This simplifies the usage from do_ld16_beN and do_st16_leN, where
we weren't locking the entire operation and had to jump through
hoops to pass addr and size.

Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
---
  accel/tcg/cputlb.c | 65 +++++++++++++++++++++++-----------------------
  1 file changed, 32 insertions(+), 33 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ba44501a7c..d28606b93e 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2066,24 +2066,21 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
  /**
   * do_ld_mmio_beN:
   * @env: cpu context
- * @p: translation parameters
+ * @full: page parameters
   * @ret_be: accumulated data
+ * @addr: virtual address
+ * @size: number of bytes
   * @mmu_idx: virtual address context
   * @ra: return address into tcg generated code, or 0
   *
- * Load @p->size bytes from @p->addr, which is memory-mapped i/o.
+ * Load @size bytes from @addr, which is memory-mapped i/o.
   * The bytes are concatenated in big-endian order with @ret_be.

Do you mind adding:

     * Called with iothread lock held.

here and in do_st_mmio_leN()?

Otherwise:
Reviewed-by: Philippe Mathieu-Daudé <phi...@linaro.org>

   */
