Hello Igor, everyone,

We seem to be running into the "virtio: error trying to map MMIO memory" issue on 'legacy' vhost-net with 64 regions, both on VMs with a relatively small number of DIMMs (fewer than ten of 512MB each) and on larger ones, where it can show up on literally every boot. I would tentatively link the problem to memory fragmentation, since busier hypervisors tend to halt VMs on this error more often, but it is still a very rare issue and cannot be reproduced deterministically.

There also seems to be a (very unobvious) link to CVE-2015-5307: we started seeing these stops more frequently after rolling in the corresponding patch. Before it, we saw this stop roughly once per 1M machine-hours; now it appears about 20 times more often across the whole infrastructure.

Could 4de7255f7d2be5e51664c6ac6011ffd6e5463571 + 1e0994730f772580ff98754eb5595190cdf371ef (and the rest of the queue, for example as carried in the RHEL kernel) be of interest for fixing this, or must there be another cause, given that the problem was only amplified by a patch that could add nothing but timing (and therefore racing) conditions?

Static configurations where all memory is populated without pc-dimm devices are not affected, so we are dealing only with backend device memory accesses into high memory (above the 512MB populated base):
  -m 512,slots=31,maxmem=16384M -object memory-backend-ram,id=mem0,size=512M -device pc-dimm,id=dimm0,node=0,memdev=mem0 ......

Thanks!
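
P.S. In case it helps the discussion: if I read the queue right, it (among other things) replaces the hard-coded VHOST_MEMORY_MAX_NREGIONS=64 limit with a max_mem_regions module parameter, so whether a given hypervisor already carries the backport could be probed roughly like this (just a sketch under that assumption; please correct me if the parameter is named differently in the RHEL backport):

  # Check whether the vhost module exposes the region limit as a parameter
  # (only present with the patched series; older kernels hard-code 64).
  if [ -r /sys/module/vhost/parameters/max_mem_regions ]; then
      echo "patched vhost, current region limit: $(cat /sys/module/vhost/parameters/max_mem_regions)"
  else
      echo "unpatched vhost, hard-coded limit of 64 memory regions"
  fi
  # If present, the limit can only be raised at module load time, e.g.:
  #   modprobe vhost max_mem_regions=128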