On Fri, Mar 13, 2020 at 12:31:22PM -0400, Peter Xu wrote:
> On Fri, Mar 13, 2020 at 11:29:59AM -0400, Michael S. Tsirkin wrote:
> > On Fri, Mar 13, 2020 at 01:44:46PM +0100, Halil Pasic wrote:
> > > [..]
> > > > >
> > > > > CCing Tom. @Tom does vhost-vsock work for you with SEV and
> > > > > current qemu?
> > > > >
> > > > > Also, one can specify iommu_platform=on on a device that ain't a
> > > > > part of a secure-capable VM, just for the fun of it. And that
> > > > > breaks vhost-vsock. Or is setting iommu_platform=on only valid if
> > > > > qemu-system-s390x is protected virtualization capable?
> > > > >
> > > > > BTW, I don't have a strong opinion on the Fixes tag. We currently
> > > > > do not recommend setting iommu_platform, and thus I don't think
> > > > > we care too much about past QEMUs having problems with it.
> > > > >
> > > > > Regards,
> > > > > Halil
> > > >
> > > > Let's just say that if we do have a Fixes: tag, we want to set it
> > > > correctly to the commit that needs this fix.
> > >
> > > I finally did some digging regarding the performance degradation. For
> > > s390x the performance degradation on vhost-net was introduced by
> > > commit 076a93d797 ("exec: simplify address_space_get_iotlb_entry").
> > > Before that, IOMMUTLBEntry.addr_mask used to be based on plen, which
> > > in turn was calculated as the rest of the memory region's size (from
> > > the given address), and covered most of the guest address space. That
> > > is, we didn't have a whole lot of IOTLB API overhead.
> > >
> > > With commit 076a93d797 I see IOMMUTLBEntry.addr_mask == 0xfff, which
> > > comes as ~TARGET_PAGE_MASK from flatview_do_translate(). To get
> > > things working properly I applied 75e5b70e6, b021d1c044, and
> > > d542800d1e on top of both 076a93d797 and 076a93d797~1.
> >
> > Peter, what's your take on this one?
>
> Commit 076a93d797 was part of a patchset meant to provide sensible
> IOTLB entries, and it should also have started to work with huge
> pages.
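For reference, a minimal standalone sketch, not the actual QEMU code, of
what the addr_mask observation above means in practice: an IOMMUTLBEntry
spans addr_mask + 1 bytes, so a page-sized mask of 0xfff forces one
translation call per guest page, where a plen-derived mask could cover a
whole memory region (the 1 GiB region size below is an arbitrary
example):

/*
 * Sketch only: field set trimmed to the one that matters here.
 * An IOTLB entry spans addr_mask + 1 bytes, so 0xfff
 * (~TARGET_PAGE_MASK for 4 KiB pages) means one entry per page,
 * while a plen-derived mask can span an entire region.
 */
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_SIZE 4096ULL
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

typedef struct {
    uint64_t iova;            /* guest address being translated */
    uint64_t translated_addr; /* result of the translation */
    uint64_t addr_mask;       /* entry covers addr_mask + 1 bytes */
} IOTLBSketch;

int main(void)
{
    /* After 076a93d797: mask clamped to the target page size. */
    IOTLBSketch per_page = { 0, 0, ~TARGET_PAGE_MASK };

    /* Before: mask from plen; assume a 1 GiB region for the example. */
    IOTLBSketch per_region = { 0, 0, (1ULL << 30) - 1 };

    printf("entries needed to map 1 GiB: %llu vs %llu\n",
           (unsigned long long)((1ULL << 30) / (per_page.addr_mask + 1)),
           (unsigned long long)((1ULL << 30) / (per_region.addr_mask + 1)));
    return 0;
}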
So the issue fundamentally is that it never produces entries larger than
page size. Wasteful even just with huge pages, and all the more so with
passthrough, which could have gigabyte entries. Want to try fixing that?

> Frankly speaking, after a few years I forgot the original motivation
> of that whole thing, but IIRC there's a patch that was trying to speed
> things up especially for vhost, though I noticed it's not merged:
>
> https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg00574.html
>
> Regarding the current patch, I'm not sure I understand it correctly,
> but does that performance issue only happen when (1) there's no
> intel-iommu device, and (2) there is iommu_platform=on specified for
> the vhost backend?
>
> If so, I'd confess I am not too surprised that this fails the boot
> with vhost-vsock, because after all we specified iommu_platform=on
> explicitly on the cmdline, so if we want it to work we can simply
> remove that iommu_platform=on while vhost-vsock doesn't support it
> yet... I thought iommu_platform=on was added for that case - when we
> want to force the IOMMU to be enabled from the host side, and it
> should always be used with a vIOMMU device.
>
> However, I also agree that from a performance POV this patch helps for
> this quite special case.
>
> Thanks,
>
> --
> Peter Xu
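To make the iommu_platform=on semantics above concrete, a small
self-contained sketch follows. It is not QEMU's actual code path; the
backend struct and the check are hypothetical, and only the
VIRTIO_F_IOMMU_PLATFORM bit number comes from the virtio spec. With the
feature negotiated, ring and buffer addresses are IOVAs that must be
translated through the platform IOMMU, so a backend without such
translation support has to fail the setup:

/*
 * Hypothetical sketch of the iommu_platform=on contract, not
 * QEMU's actual vhost code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_F_IOMMU_PLATFORM 33 /* addresses are IOVAs, not GPAs */

/* Hypothetical backend descriptor for the sake of the example. */
struct backend {
    const char *name;
    bool can_translate_iova; /* i.e., implements IOTLB translation */
};

static bool backend_ok(const struct backend *b, uint64_t features)
{
    if (!(features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))) {
        return true; /* plain guest physical addresses, no IOMMU */
    }
    /* iommu_platform=on: every DMA address must be translated. */
    return b->can_translate_iova;
}

int main(void)
{
    struct backend vsock = { "vhost-vsock", false };
    uint64_t features = 1ULL << VIRTIO_F_IOMMU_PLATFORM;

    if (!backend_ok(&vsock, features)) {
        printf("%s: iommu_platform=on not supported, failing setup\n",
               vsock.name);
    }
    return 0;
}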