Hi experts, sorry to disturb you.
I failed to find any real data about VFIO interrupt performance in the community, so I am boldly mailing you directly. We have a PCIe device working on an x86 platform, with no VM in our environment. After enabling vfio/vfio_pci/vfio_iommu_type1 in the kernel, I plan to replace the kernel-side device driver with the VFIO framework and reimplement it in user space. The original intention is just to get rid of the dependency on the kernel, so that the application which needs to access our PCIe device becomes a pure user-space application and can run on other Linux distributions (no custom kernel driver needed).

Our PCIe device has the following characteristics:
1. It generates a great number of interrupts when working.
2. It also has a high demand on interrupt processing speed.
3. It needs to access almost all of the BAR space after mapping.

So I want to check with you: compared with the previous kernel-side device driver, is there a large decrease in interrupt processing speed when a huge number of interrupts arrive in a short time? What are your comments on this attempt? Is it worthwhile to move the driver to user space in this kind of situation (no VM, huge interrupt counts, etc.)?

BTW, I found some intermittent issues reported in the community when using VFIO, such as:
1. Accessing some devices' extended configuration space occasionally causes problems.
2. Accessing multiple devices in the same IOMMU group at the same time occasionally triggers issues.
Are these issues related to IOMMU hardware limitations, or is there some way we can work around them for now?

Many thanks for your time!

Best regards,
James
_______________________________________________ vfio-users mailing list vfio-users@redhat.com https://www.redhat.com/mailman/listinfo/vfio-users