Hi Linu,

On 05/10/2017 14:13, Auger Eric wrote:
> Hi Linu,
>
> On 05/10/2017 13:54, Auger Eric wrote:
>> Hi Linu,
>> On 05/10/2017 12:46, Auger Eric wrote:
>>> Hi Linu,
>>> On 04/10/2017 13:49, Linu Cherian wrote:
>>>> Hi Eric,
>>>>
>>>>
>>>> On Wed Sep 27, 2017 at 11:24:01AM +0200, Auger Eric wrote:
>>>>> Hi Linu,
>>>>>
>>>>> On 27/09/2017 11:21, Linu Cherian wrote:
>>>>>> On Wed Sep 27, 2017 at 10:55:07AM +0200, Auger Eric wrote:
>>>>>>> Hi Linu,
>>>>>>>
>>>>>>> On 27/09/2017 10:30, Bharat Bhushan wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Linu Cherian [mailto:linuc.dec...@gmail.com]
>>>>>>>>> Sent: Wednesday, September 27, 2017 1:11 PM
>>>>>>>>> To: Bharat Bhushan <bharat.bhus...@nxp.com>
>>>>>>>>> Cc: eric.au...@redhat.com; eric.auger....@gmail.com;
>>>>>>>>> peter.mayd...@linaro.org; alex.william...@redhat.com; m...@redhat.com;
>>>>>>>>> qemu-...@nongnu.org; qemu-devel@nongnu.org; kevin.t...@intel.com;
>>>>>>>>> marc.zyng...@arm.com; t...@semihalf.com; will.dea...@arm.com;
>>>>>>>>> drjo...@redhat.com; robin.mur...@arm.com; christoffer.d...@linaro.org;
>>>>>>>>> bharatb.ya...@gmail.com
>>>>>>>>> Subject: Re: [Qemu-arm] [PATCH v4 0/5] virtio-iommu: VFIO integration
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> On Wed Sep 27, 2017 at 12:03:15PM +0530, Bharat Bhushan wrote:
>>>>>>>>>> This patch series integrates VFIO/VHOST with virtio-iommu.
>>>>>>>>>>
>>>>>>>>>> This version is mainly about rebasing on the v4 version of the virtio-iommu
>>>>>>>>>> device framework from Eric Auger and addressing review comments.
>>>>>>>>>>
>>>>>>>>>> This patch series allows PCI pass-through using virtio-iommu.
>>>>>>>>>>
>>>>>>>>>> This series is based on:
>>>>>>>>>> - virtio-iommu kernel driver by Jean-Philippe Brucker
>>>>>>>>>>   [1] [RFC] virtio-iommu version 0.4
>>>>>>>>>>   git://linux-arm.org/virtio-iommu.git branch viommu/v0.4
>>>>>>>
>>>>>>> Just to make sure, do you use the v0.4 virtio-iommu driver from the
>>>>>>> above branch?
>>>>>>>
>>>>>>> Thanks
>>>>>>
>>>>>> I am using git://linux-arm.org/linux-jpb.git branch virtio-iommu/v0.4.
>>>>>> Hope you are referring to the same.
>>>>>
>>>>> Yes, that's the right one. I will also investigate on my side this
>>>>> afternoon.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Eric
>>>>
>>>> With the workaround below, at least ping works for me.
>>>>
>>>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>>>> index 249964a..2904617 100644
>>>> --- a/drivers/iommu/virtio-iommu.c
>>>> +++ b/drivers/iommu/virtio-iommu.c
>>>>         .attach_dev             = viommu_attach_dev,
>>>>         .map                    = viommu_map,
>>>>         .unmap                  = viommu_unmap,
>>>> -       .map_sg                 = viommu_map_sg,
>>>> +       .map_sg                 = default_iommu_map_sg,
>>>>         .iova_to_phys           = viommu_iova_to_phys,
>>>>         .add_device             = viommu_add_device,
>>>>         .remove_device          = viommu_remove_device,
>>>>
>>>>
>>>> Looks like the QEMU backend doesn't have support to handle the map
>>>> requests from virtio_iommu_map_sg, since it merges multiple map requests
>>>> into one with a map size larger than the page size (e.g. 0x5000).
>>> On my side I understand viommu_map_sg builds a VIRTIO_IOMMU_T_MAP
>>> request for each sg element. The map size matches the sg element size.
>>> Then each request is sent separately in _viommu_send_reqs_sync. I don't
>>> see any concatenation. It looks like Jean plans to check whether it can
>>> concatenate anything (/* TODO: merge physically-contiguous mappings if
>>> any */), but this is not implemented yet.
>>
>> Hopefully I was just able to reproduce your issue with an igb device. I
>> keep on debugging...
>>
>> vfio_get_vaddr 1 len=0x3000 iotlb->addr_mask=0x2fff
>> qemu-system-aarch64: iommu has granularity incompatible with target AS
>>
>>
>> Thanks
>>
>> Eric
>>>
>>> However you should be allowed to map 1 sg element of 5 pages and then
>>> notify the host about this event, I think. Still looking at the code...
>>>
>>> I still can't reproduce the issue at the moment. What kind of device are
>>> you assigning?
>>>
>>> Thanks
>>>
>>> Eric
>>>>
>>>> At least vfio_get_vaddr, called from vfio_iommu_map_notify in QEMU,
>>>> expects the map size to be a power of 2.
>
> Actually I missed the most important part here ;-)
>>>>
>>>>     if (len & iotlb->addr_mask) {
> This check looks suspicious to me. In our case the len is not modified
> by the previous translation, yet the check still fails and I don't see
> why. It should be valid to be able to notify 5 granules.
So after discussion with Alex, it looks like the way we currently notify the
host is wrong: we set the addr_mask to the mapping/unmapping size - 1,
whereas it should be a page mask instead (granule size or block size?). So
if the guest maps 5 x 4kB pages, we should send 5 notifications, one per
page, and not a single one. It is unclear to me whether we can notify with a
hugepage/block page size mask; Peter may confirm or refute this. In the
vsmmuv3 code I notify by granule or block size (a rough sketch of what I
mean is appended at the end of this mail).

Bharat, please can you add this to your TODO list?

Linu, thanks a lot for the time you spent debugging this issue. Curiously,
on my side it is really seldom hit, but it is ...

Thanks!

Eric
>
> Thanks
>
> Eric
>>>>         error_report("iommu has granularity incompatible with target AS");
>>>>         return false;
>>>>     }
>>>>
>>>> Just trying to understand how this is not hitting in your case.
>>>>
>>>
>>
>
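P.S.: to make the "notify per granule" point above concrete, here is a
minimal sketch of the shape I have in mind. It is only an illustration, not
the actual vsmmuv3 or virtio-iommu code: the helper name is made up, and the
memory_region_notify_iommu()/IOMMUTLBEntry details may differ depending on
the QEMU version you are on.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"

/*
 * Sketch only: split a guest MAP covering several pages into one notifier
 * event per granule, so that addr_mask is always a page mask (granule - 1)
 * and the "len & iotlb->addr_mask" check in vfio_get_vaddr() is satisfied.
 * For an UNMAP the same loop would be used with perm = IOMMU_NONE.
 */
static void notify_map_per_granule(IOMMUMemoryRegion *iommu_mr, hwaddr iova,
                                   hwaddr paddr, hwaddr size, hwaddr granule)
{
    IOMMUTLBEntry entry = {
        .target_as = &address_space_memory,
        .perm      = IOMMU_RW,
        .addr_mask = granule - 1,   /* page mask, not (size - 1) */
    };
    hwaddr off;

    for (off = 0; off < size; off += granule) {
        entry.iova            = iova + off;
        entry.translated_addr = paddr + off;
        memory_region_notify_iommu(iommu_mr, entry);
    }
}

For the 5 x 4kB example this sends 5 notifications with addr_mask = 0xfff
instead of a single one with addr_mask = 0x4fff (which is what trips the
check today). Whether a block/hugepage-sized mask could be used instead is
the open question above.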