On Thu, Jan 25, 2018 at 03:07:23PM +0100, Paolo Bonzini wrote:
> On 23/01/2018 17:07, Michael S. Tsirkin wrote:
> >> It's not clear to me how to do this. E.g. need a way to report
> >> failure to VM2 or #PF?
> > 
> > Why would there be a failure? qemu running vm1 would be responsible for
> > preventing access to vm2's memory not mapped through an IOMMU.
> > Basically munmap these.
> 
> Access to VM2's memory would use VM2's configured IOVAs for VM1's
> requester id.  VM2's QEMU sends device IOTLB messages to VM1's QEMU,
> which would remap VM2's memory on the fly into VM1's BAR2.

Right. Almost.
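
For concreteness, this is roughly what that on-the-fly remap could look
like on VM1's qemu side.  Sketch only: the vhost_iotlb_msg layout
mirrors the vhost UAPI, everything else (struct vvu_bar2, the fd
bookkeeping, the function name) is made up to illustrate the idea:

/*
 * Sketch: VM1's qemu reacts to a device IOTLB update coming from VM2's
 * qemu over the vhost-user socket by mapping the translated chunk of
 * VM2's memory into the region backing BAR2, at the offset given by
 * the IOVA.
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

struct vhost_iotlb_msg {        /* mirrors the vhost UAPI struct */
    uint64_t iova;
    uint64_t size;
    uint64_t uaddr;             /* address in VM2's qemu: fd + offset */
    uint8_t  perm;              /* VHOST_ACCESS_RO/WO/RW */
    uint8_t  type;              /* VHOST_IOTLB_UPDATE/INVALIDATE/... */
};

struct vvu_bar2 {               /* made-up bookkeeping for the BAR2 window */
    void     *hva;              /* where BAR2's backing memory is mapped */
    uint64_t  size;             /* BAR2 size */
    int       mem_fd;           /* fd of the VM2 memory region exposed */
    uint64_t  mem_base;         /* uaddr of the start of that region */
};

static int vvu_iotlb_update(struct vvu_bar2 *bar,
                            const struct vhost_iotlb_msg *msg)
{
    if (msg->iova > bar->size || msg->size > bar->size - msg->iova) {
        return -1;              /* IOVA doesn't fit in BAR2 - see below */
    }

    /* Assuming VHOST_ACCESS_RO == 1 and VHOST_ACCESS_WO == 2. */
    int prot = (msg->perm & 1 ? PROT_READ : 0) |
               (msg->perm & 2 ? PROT_WRITE : 0);

    /* Atomically replace whatever is mapped at BAR2 + iova with the
     * translated chunk of VM2's memory. */
    void *p = mmap((char *)bar->hva + msg->iova, msg->size, prot,
                   MAP_SHARED | MAP_FIXED, bar->mem_fd,
                   (off_t)(msg->uaddr - bar->mem_base));
    return p == MAP_FAILED ? -1 : 0;
}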

One problem is that the IOVA range is bigger than the RAM range
(e.g. a 48-bit IOVA space is 256TB), so you will have trouble making
arbitrary I/O virtual addresses fit in a BAR.

This is why I suggested a hybrid approach where translation happens
within the guest and qemu only does protection.

Another problem with it is that the IOMMU has page granularity, while
with hugetlbfs we might not be able to remap at that granularity.
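
Concretely, if VM2's memory is backed by 2M or 1G huge pages, the
smallest unit VM1's qemu can map or unmap into BAR2 is a whole huge
page, so a 4K-granular IOTLB entry can't be honoured by remapping.
Something along these lines (made-up helper name) would have to
reject it:

#include <stdbool.h>
#include <stdint.h>

/* map_page_size: 4K for anonymous/tmpfs memory, 2M or 1G for
 * hugetlbfs.  Anything not a multiple of it can't be mapped or
 * unmapped on its own. */
static bool iotlb_entry_mappable(uint64_t iova, uint64_t size,
                                 uint64_t uaddr, uint64_t map_page_size)
{
    uint64_t mask = map_page_size - 1;      /* power of two assumed */

    return ((iova | size | uaddr) & mask) == 0;
}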

Not sure what to do about it - teach the host to break up pages?
Pass the limitation to the guest through virtio-iommu?
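
(For the second option, IIRC the virtio-iommu draft already exposes the
supported page sizes in its config space, so qemu could advertise the
hugetlbfs granularity there instead of 4K.  Field name below is from
memory, the spec is authoritative:)

#include <stdint.h>

/* Rough approximation of the virtio-iommu config layout.  Each set
 * bit N means the device supports 2^N byte mappings. */
struct virtio_iommu_config {
    uint64_t page_size_mask;
    /* ... input range, domain range, etc. elided ... */
};

/* e.g. only allow 2M mappings when the backend is 2M hugetlbfs:   */
/*     cfg.page_size_mask = 1ULL << 21;                            */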

Ideas?

> It's not trivial to do it efficiently, but it's possible.  The important
> thing is that, if VM2 has an IOMMU, QEMU must *not* connect to a
> virtio-vhost-user device that lacks IOTLB support.  But that would be a
> vhost-user bug, not a virtio-vhost-user bug---and that's the beauty of
> Stefan's approach. :)
> 
> Paolo
