On 02.03.21 17:21, David Hildenbrand wrote:
Similar to VFIO, vDPA will go ahead and map+pin all guest memory. Memory
that used to be discarded will get re-populated, and if we
discard+re-access memory after mapping+pinning, the pages mapped into the
vDPA IOMMU will go out of sync with the actual pages mapped into the user
space page tables.
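To illustrate the problem from user space (illustration only, not part of
the patch): discarding via madvise(MADV_DONTNEED) drops the backing page,
and the next access populates a fresh zero page. A device that pinned the
old page through the IOMMU would keep seeing the stale contents:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = sysconf(_SC_PAGESIZE);
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        return 1;
    }

    memset(p, 0x55, len);            /* populate the page */
    /* ... a vDPA device would map+pin the backing page here ... */

    madvise(p, len, MADV_DONTNEED);  /* discard: the backing page is freed */

    /* Re-access populates a fresh, zeroed page. A pinned IOMMU mapping
     * would still point at the old page -- the two views diverge. */
    printf("after discard: 0x%02x (was 0x55)\n", p[0]);

    munmap(p, len);
    return 0;
}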
Set discarding of RAM broken such that:
- virtio-mem and vhost-vdpa are mutually exclusive
- virtio-balloon is inhibited and no memory discards will get issued
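For context: ram_block_discard_disable() and its counterpart
ram_block_discard_require() are backed by two counters that fail with
-EBUSY while the opposite side is active; virtio-mem sits on the "require"
side, which is what makes it mutually exclusive with vhost-vdpa. A
simplified, non-atomic sketch of the idea (the real implementation uses
atomic counters; names are suffixed _sketch because this is an
illustration, not the actual code):

#include <errno.h>
#include <stdbool.h>

static unsigned int discard_disabled; /* pinning users: vhost-vdpa, VFIO */
static unsigned int discard_required; /* discarding users: virtio-mem */

static int ram_block_discard_disable_sketch(bool state)
{
    if (!state) {
        discard_disabled--;
        return 0;
    }
    if (discard_required) {
        return -EBUSY; /* someone relies on discarding RAM */
    }
    discard_disabled++;
    return 0;
}

static int ram_block_discard_require_sketch(bool state)
{
    if (!state) {
        discard_required--;
        return 0;
    }
    if (discard_disabled) {
        return -EBUSY; /* someone (e.g. vhost-vdpa) pins all RAM */
    }
    discard_required++;
    return 0;
}

virtio-balloon only has to check the disable side before issuing a
discard, which is why it gets inhibited rather than failing.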
In the future, we might be able to support coordinated discarding of RAM
as used by virtio-mem and as planned for VFIO.
Cc: Jason Wang <jasow...@redhat.com>
Cc: Michael S. Tsirkin <m...@redhat.com>
Cc: Cindy Lu <l...@redhat.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
Note: I was not actually able to reproduce/test this, as I failed to get
the vdpa_sim/vdpa_sim_net running on upstream Linux (whatever vdpa,
vhost_vdpa, vdpa_sim, vdpa_sim_net modules I probe, and in which order, no
vdpa devices appear under /sys/bus/vdpa/devices/ or /dev/).
---
hw/virtio/vhost-vdpa.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 01d2101d09..86058d4041 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -278,6 +278,17 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
     uint64_t features;
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
     trace_vhost_vdpa_init(dev, opaque);
+    int ret;
+
+    /*
+     * Similar to VFIO, we end up pinning all guest memory and have to
+     * disable discarding of RAM.
+     */
+    ret = ram_block_discard_disable(true);
+    if (ret) {
+        error_report("Cannot set discarding of RAM broken");
+        return ret;
+    }
 
     v = opaque;
     v->dev = dev;
@@ -302,6 +313,8 @@ static int vhost_vdpa_cleanup(struct vhost_dev *dev)
     memory_listener_unregister(&v->listener);
 
     dev->opaque = NULL;
+    ram_block_discard_disable(false);
+
     return 0;
 }
 
@MST, do you have this on your radar? Thanks.
--
Thanks,
David / dhildenb