> On Jun 6, 2022, at 2:32 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>
> On Tue, May 24, 2022 at 9:11 AM Jagannathan Raman <jag.ra...@oracle.com> wrote:
>>
>> Forward remote device's interrupts to the guest
>>
>> Signed-off-by: Elena Ufimtseva <elena.ufimts...@oracle.com>
>> Signed-off-by: John G Johnson <john.g.john...@oracle.com>
>> Signed-off-by: Jagannathan Raman <jag.ra...@oracle.com>
>> ---
>>  include/hw/pci/pci.h              |  13 ++++
>>  include/hw/remote/vfio-user-obj.h |   6 ++
>>  hw/pci/msi.c                      |  16 ++--
>>  hw/pci/msix.c                     |  10 ++-
>>  hw/pci/pci.c                      |  13 ++++
>>  hw/remote/machine.c               |  14 +++-
>>  hw/remote/vfio-user-obj.c         | 123 ++++++++++++++++++++++++++++++
>>  stubs/vfio-user-obj.c             |   6 ++
>>  MAINTAINERS                       |   1 +
>>  hw/remote/trace-events            |   1 +
>>  stubs/meson.build                 |   1 +
>>  11 files changed, 193 insertions(+), 11 deletions(-)
>>  create mode 100644 include/hw/remote/vfio-user-obj.h
>>  create mode 100644 stubs/vfio-user-obj.c
>>
>
> So I had a question about a few bits below. Specifically I ran into
> issues when I had set up two devices to be assigned to the same VM via
> two vfio-user-pci/x-vfio-user-server interfaces.
Thanks for the heads up, Alexander!

>
> What I am hitting is an assert(irq_num < bus->nirq) in
> pci_bus_change_irq_level in the server.

That issue is because we are initializing only one IRQ for the PCI bus,
but it should be more. We will update how the bus initializes interrupts
and get back to you.

We only tested MSI/MSI-X devices for the multi-device config. We should
also test INTx - could you please confirm which device you're using, so
we can verify that it works before posting the next revision?

Thank you!
--
Jag

>
>> diff --git a/hw/remote/machine.c b/hw/remote/machine.c
>> index 645b54343d..75d550daae 100644
>> --- a/hw/remote/machine.c
>> +++ b/hw/remote/machine.c
>> @@ -23,6 +23,8 @@
>>  #include "hw/remote/iommu.h"
>>  #include "hw/qdev-core.h"
>>  #include "hw/remote/iommu.h"
>> +#include "hw/remote/vfio-user-obj.h"
>> +#include "hw/pci/msi.h"
>>
>>  static void remote_machine_init(MachineState *machine)
>>  {
>> @@ -54,12 +56,16 @@ static void remote_machine_init(MachineState *machine)
>>
>>      if (s->vfio_user) {
>>          remote_iommu_setup(pci_host->bus);
>> -    }
>>
>> -    remote_iohub_init(&s->iohub);
>> +        msi_nonbroken = true;
>> +
>> +        vfu_object_set_bus_irq(pci_host->bus);
>> +    } else {
>> +        remote_iohub_init(&s->iohub);
>>
>> -    pci_bus_irqs(pci_host->bus, remote_iohub_set_irq, remote_iohub_map_irq,
>> -                 &s->iohub, REMOTE_IOHUB_NB_PIRQS);
>> +        pci_bus_irqs(pci_host->bus, remote_iohub_set_irq, remote_iohub_map_irq,
>> +                     &s->iohub, REMOTE_IOHUB_NB_PIRQS);
>> +    }
>>
>>      qbus_set_hotplug_handler(BUS(pci_host->bus), OBJECT(s));
>>  }
>
> If I am reading the code right, this limits us to one legacy interrupt
> in the vfio_user case, irq 0, correct? Is this intentional? Just
> wanted to verify, as this seems to limit us to supporting only one
> device based on the mapping below.
>
>> diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
>> index ee28a93782..eeb165a805 100644
>> --- a/hw/remote/vfio-user-obj.c
>> +++ b/hw/remote/vfio-user-obj.c
>> @@ -53,6 +53,9 @@
>>  #include "hw/pci/pci.h"
>>  #include "qemu/timer.h"
>>  #include "exec/memory.h"
>> +#include "hw/pci/msi.h"
>> +#include "hw/pci/msix.h"
>> +#include "hw/remote/vfio-user-obj.h"
>>
>>  #define TYPE_VFU_OBJECT "x-vfio-user-server"
>>  OBJECT_DECLARE_TYPE(VfuObject, VfuObjectClass, VFU_OBJECT)
>> @@ -96,6 +99,10 @@ struct VfuObject {
>>      Error *unplug_blocker;
>>
>>      int vfu_poll_fd;
>> +
>> +    MSITriggerFunc *default_msi_trigger;
>> +    MSIPrepareMessageFunc *default_msi_prepare_message;
>> +    MSIxPrepareMessageFunc *default_msix_prepare_message;
>>  };
>>
>>  static void vfu_object_init_ctx(VfuObject *o, Error **errp);
>> @@ -520,6 +527,111 @@ static void vfu_object_register_bars(vfu_ctx_t *vfu_ctx, PCIDevice *pdev)
>>      }
>>  }
>>
>> +static int vfu_object_map_irq(PCIDevice *pci_dev, int intx)
>> +{
>> +    int pci_bdf = PCI_BUILD_BDF(pci_bus_num(pci_get_bus(pci_dev)),
>> +                                pci_dev->devfn);
>> +
>> +    return pci_bdf;
>> +}
>> +
>
> This bit ends up mapping it so that the BDF ends up setting the IRQ
> number. So for example device 0, function 0 will be IRQ 0, and device
> 1, function 0 will be IRQ 8. Just wondering why it is implemented this
> way if we only intend to support one device. Also I am wondering if we
> should support some sort of IRQ sharing?
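Yes, that is how the mapping works today - vfu_object_map_irq() returns the
BDF as the IRQ number, so the second device already maps to IRQ 8 while
vfu_object_set_bus_irq() sizes the bus for a single IRQ, which is exactly
the assert you are hitting. A small standalone sketch of the arithmetic
(the two macros are local copies of PCI_DEVFN/PCI_BUILD_BDF from QEMU's
pci.h, for illustration only):

#include <stdio.h>

/* Local copies of the relevant QEMU macros, illustration only. */
#define PCI_DEVFN(slot, func)      ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_BUILD_BDF(bus, devfn)  (((bus) << 8) | (devfn))

int main(void)
{
    int nirq = 1;  /* vfu_object_set_bus_irq() currently passes 1 to pci_bus_irqs() */

    /* vfu_object_map_irq() hands back the BDF as the INTx IRQ number: */
    int irq_dev0 = PCI_BUILD_BDF(0, PCI_DEVFN(0, 0));  /* bus 0, device 0, function 0 -> 0 */
    int irq_dev1 = PCI_BUILD_BDF(0, PCI_DEVFN(1, 0));  /* bus 0, device 1, function 0 -> 8 */

    printf("dev0 -> irq_num %d, in range: %s\n", irq_dev0, irq_dev0 < nirq ? "yes" : "no");
    printf("dev1 -> irq_num %d, in range: %s\n", irq_dev1, irq_dev1 < nirq ? "yes" : "no");

    return 0;
}

With nirq == 1, only bus 0, device 0, function 0 stays below bus->nirq, so
the second device trips assert(irq_num < bus->nirq) in
pci_bus_change_irq_level(), which matches what you are seeing.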
>
>> +static int vfu_object_setup_irqs(VfuObject *o, PCIDevice *pci_dev)
>> +{
>> +    vfu_ctx_t *vfu_ctx = o->vfu_ctx;
>> +    int ret;
>> +
>> +    ret = vfu_setup_device_nr_irqs(vfu_ctx, VFU_DEV_INTX_IRQ, 1);
>> +    if (ret < 0) {
>> +        return ret;
>> +    }
>> +
>> +    if (msix_nr_vectors_allocated(pci_dev)) {
>> +        ret = vfu_setup_device_nr_irqs(vfu_ctx, VFU_DEV_MSIX_IRQ,
>> +                                       msix_nr_vectors_allocated(pci_dev));
>> +    } else if (msi_nr_vectors_allocated(pci_dev)) {
>> +        ret = vfu_setup_device_nr_irqs(vfu_ctx, VFU_DEV_MSI_IRQ,
>> +                                       msi_nr_vectors_allocated(pci_dev));
>> +    }
>> +
>> +    if (ret < 0) {
>> +        return ret;
>> +    }
>> +
>> +    vfu_object_setup_msi_cbs(o);
>> +
>> +    pci_dev->irq_opaque = vfu_ctx;
>> +
>> +    return 0;
>> +}
>> +
>> +void vfu_object_set_bus_irq(PCIBus *pci_bus)
>> +{
>> +    pci_bus_irqs(pci_bus, vfu_object_set_irq, vfu_object_map_irq, pci_bus, 1);
>> +}
>> +
>>  /*
>>   * TYPE_VFU_OBJECT depends on the availability of the 'socket' and 'device'
>>   * properties. It also depends on devices instantiated in QEMU. These
>
> So this is the code that was called earlier that is being used to
> assign 1 interrupt to the bus. I am just wondering if that is
> intentional and if the expected behavior is to only support one device
> per server for now?
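It was not intentional to limit this to one device per server - as noted
above, the bus should be initialized with more than one INTx IRQ. One
possible direction (untested sketch, not the final fix; it assumes QEMU's
PCI_DEVFN_MAX constant for the number of devfns on a bus) is to size the
IRQ count to cover the largest BDF that vfu_object_map_irq() can return:

void vfu_object_set_bus_irq(PCIBus *pci_bus)
{
    int bus_num = pci_bus_num(pci_bus);

    /*
     * Largest IRQ number vfu_object_map_irq() can hand back for this
     * bus, i.e. the BDF of devfn PCI_DEVFN_MAX - 1.
     */
    int max_bdf = PCI_BUILD_BDF(bus_num, PCI_DEVFN_MAX - 1);

    pci_bus_irqs(pci_bus, vfu_object_set_irq, vfu_object_map_irq, pci_bus,
                 max_bdf + 1);
}

That would keep the BDF-based mapping while making every devfn on the bus
a valid irq_num for pci_bus_change_irq_level(). We will confirm the exact
sizing, and also consider your IRQ sharing question, before posting the
next revision.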