When an IOMMUMemoryRegion is in front of a virtio device, address_space_cache_init() does not set cache->ptr, as the memory region is not a RAM region. However, when the device performs an access, we end up in the glue()-generated accessors, which perform the translation and then use MAP_RAM(). The latter uses the unset ptr and returns a bogus pointer, which leads to a SIGSEGV in address_space_lduw_internal_cached_slow(), for instance.

Let's test whether cache->ptr is set and, if it is not, fall back to the old macro definition, i.e. qemu_map_ram_ptr(). This fixes the use cases featuring a vIOMMU (Intel IOMMU and ARM SMMU) which currently lead to a SIGSEGV.
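For clarity, here is the intended MAP_RAM() behaviour written out as a plain helper. This is only an illustration of the logic, not code added by the patch; map_ram_cached_slow is a hypothetical name, and it assumes cache->ptr is left NULL whenever the cached region is reached through an IOMMU translation rather than direct RAM:

    /* Illustrative sketch only, not part of the patch. */
    static inline void *map_ram_cached_slow(MemoryRegionCache *cache,
                                            MemoryRegion *mr, hwaddr ofs)
    {
        if (cache->ptr) {
            /* direct RAM: reuse the host pointer cached at init time */
            return cache->ptr + (ofs - cache->xlat);
        }
        /* vIOMMU case: cache->ptr was never set, resolve via the RAM block */
        return qemu_map_ram_ptr(mr->ram_block, ofs);
    }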
Fixes: 48564041a73a ("exec: reintroduce MemoryRegion caching")
Signed-off-by: Eric Auger <eric.au...@redhat.com>
---

I am not sure whether this breaks any targeted optimization, but at
least it removes the SIGSEGV.
---
 exec.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/exec.c b/exec.c
index f6645ed..46fbd25 100644
--- a/exec.c
+++ b/exec.c
@@ -3800,7 +3800,9 @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
 #define SUFFIX                   _cached_slow
 #define TRANSLATE(...)           address_space_translate_cached(cache, __VA_ARGS__)
 #define IS_DIRECT(mr, is_write)  memory_access_is_direct(mr, is_write)
-#define MAP_RAM(mr, ofs)         (cache->ptr + (ofs - cache->xlat))
+#define MAP_RAM(mr, ofs)         (cache->ptr ? \
+                                  (cache->ptr + (ofs - cache->xlat)) : \
+                                  qemu_map_ram_ptr((mr)->ram_block, ofs))
 #define INVALIDATE(mr, ofs, len) invalidate_and_set_dirty(mr, ofs, len)
 #define RCU_READ_LOCK()          ((void)0)
 #define RCU_READ_UNLOCK()        ((void)0)
-- 
2.5.5