Hi Shameer, Nicolin,

On 3/25/25 7:26 PM, Nicolin Chen wrote:
> On Tue, Mar 25, 2025 at 03:43:29PM +0000, Shameerali Kolothum Thodi wrote:
>>> For the record, I tested the series with a host VFIO device and a
>>> virtio-blk-pci device put behind the same pxb-pcie/smmu protection, and
>>> it works just fine:
>>>
>>> -+-[0000:0a]-+-01.0-[0b]----00.0  Mellanox Technologies ConnectX Family
>>> mlx5Gen Virtual Function
>>>  |           \-01.1-[0c]----00.0  Red Hat, Inc. Virtio 1.0 block device
>>>  \-[0000:00]-+-00.0  Red Hat, Inc. QEMU PCIe Host bridge
>>>              +-01.0-[01]--
>>>              +-01.1-[02]--
>>>              \-02.0  Red Hat, Inc. QEMU PCIe Expander bridge
>>>
>>> This shows that without the vcmdq feature there is no blocker to having
>>> the same smmu device protecting both accelerated and emulated devices.
>> Thanks for giving it a spin. Yes, it currently supports the above. 
>>
>> At the moment we are not using the IOTLB for the emulated dev in a
>> config like the above. Have you checked performance for either the
>> emulated or the vfio dev with that config? The light tests I have done
>> show a performance degradation for the emulated dev compared to the
>> default SMMUv3 (iommu=smmuv3).
No, I have not checked that yet. Again, I do not advocate this kind of mix,
but I wanted to check that it still works conceptually.
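
For reference, the topology above was produced with a command line along
these lines. This is only a sketch: the arm-smmuv3 device properties
(primary-bus, accel=on) follow what this series proposes and may not match
its exact syntax, and the host BDF, drive and kernel arguments are
placeholders.

qemu-system-aarch64 -M virt,gic-version=3 -m 4G -cpu host -enable-kvm \
    -kernel Image -append "root=/dev/vda rootwait" \
    -drive if=none,id=disk0,file=disk.qcow2 \
    -device pxb-pcie,id=pcie.1,bus_nr=10,bus=pcie.0 \
    -device arm-smmuv3,primary-bus=pcie.1,accel=on,id=smmu1 \
    -device pcie-root-port,id=rp1,bus=pcie.1,chassis=1 \
    -device pcie-root-port,id=rp2,bus=pcie.1,chassis=2 \
    -device vfio-pci,host=0000:75:00.1,bus=rp1 \
    -device virtio-blk-pci,drive=disk0,bus=rp2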

Thanks

Eric
>>
>> And if the emulated dev issues TLBI_NH_ASID, the code currently will
>> propagate that down to the host SMMUv3. This will affect the vfio dev
>> as well.
> VA too. Only commands with an SID field can be simply excluded.
> I think we should be concerned that the underlying SMMU CMDQ HW
> has very limited command execution capacity, so wasting command
> cycles is not ideal, as it could impact the host OS
> (and other VMs too).
>
> Thanks
> Nicolin
>
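
For illustration, here is a minimal C sketch of the filtering distinction
discussed above. It is not code from this series; the helper names and the
accelerated-SID lookup are made up. The point is that commands carrying an
SID (e.g. CFGI_STE/CFGI_CD) can be checked against the set of vfio-backed
StreamIDs before being forwarded, while ASID/VA-scoped TLBIs carry no SID
and end up on the host CMDQ even when they were issued for an emulated
device:

#include <stdbool.h>
#include <stdint.h>

/* Opcodes as defined by the SMMUv3 architecture spec. */
enum {
    CMD_CFGI_STE     = 0x03,
    CMD_CFGI_CD      = 0x05,
    CMD_TLBI_NH_ASID = 0x11,
    CMD_TLBI_NH_VA   = 0x12,
};

/* Hypothetical lookup: does this StreamID belong to a vfio-backed device? */
static bool sid_is_accelerated(uint32_t sid)
{
    return sid == 0x0b00;  /* e.g. the ConnectX VF at 0b:00.0 in the tree above */
}

static bool cmd_carries_sid(uint8_t opcode)
{
    return opcode == CMD_CFGI_STE || opcode == CMD_CFGI_CD;
}

/* Returns true if the guest command must be forwarded to the host SMMU. */
bool cmd_needs_host_forward(uint8_t opcode, uint32_t sid)
{
    if (cmd_carries_sid(opcode)) {
        /* Per-SID commands can be filtered to accelerated devices only. */
        return sid_is_accelerated(sid);
    }
    /*
     * ASID/VA invalidations have no SID, so ones issued on behalf of an
     * emulated device still reach the host CMDQ and consume its limited
     * command bandwidth.
     */
    return true;
}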

