On 26.02.2025 22:11, Jason Andryuk wrote:
> Sometimes we have to quirk the PCI IRTE to use a non-zero remap_index
> corresponding to the guest's view of the MSI data register. The MSI
> data guest vector equals the interrupt remapping table index.
>
> The ath11k wifi device does unusual things with MSIs. The driver lets
> Linux program the MSI capability. Linux internally caches the MSI data
> it thinks it programmed. It sets its affinity to CPU0. The ath11k
> driver then reads the MSI address from the PCI configuration space.
> The MSI address and cached data are then passed to other components on
> the same card to generate MSI interrupts.
I'm curious whether it's known how e.g. KVM deals with this.

> With Xen, vPCI and QEMU PCI passthrough have a guest idea of the MSI
> address and data. But Xen programs the actual hardware with its own
> address and data. With per-device IRT, Xen uses index 0. When the
> ath11k driver passes the guest address and data to the hardware, it
> generates faults when there is no IRTE for the guest data (~0x25).
>
> To work around this, we can, for per-device IRTs, program the hardware
> to use the guest data & associated IRTE. The address doesn't matter
> since the IRTE handles that, and the Xen address & vector can be used
> as expected.
>
> For vPCI, the guest MSI data is available at the time of initial MSI
> setup, but that is not the case for HVM. With HVM, the initial MSI
> setup is done when PHYSDEVOP_map_pirq is called, but the guest vector
> is only available later when XEN_DOMCTL_bind_pt_irq is called. In
> that case, we need to tear down and create a new IRTE. This later
> location can also handle vPCI.
>
> Add pirq_guest_bind_gvec to plumb down the gvec without modifying all
> call sites. Use msi_desc->gvec to pass through the desired value.
>
> Only tested with AMD-Vi. Requires per-device IRT. With AMD-Vi, the
> number of MSIs is passed in, but a minimum of a page is allocated for
> the table. The vector is 8 bits, giving indices 0-255. Even with
> 128-bit (16-byte) IRTEs, one 4096-byte page holds 4096 / 16 = 256
> entries, so we don't have to worry about overflow. N MSIs can only
> have the last one at index 255, so the guest can't expect to have N
> vectors starting above 255 - N.
>
> Signed-off-by: Xenia Ragiadakou <xenia.ragiada...@amd.com>
> Signed-off-by: Jason Andryuk <jason.andr...@amd.com>

Just to clarify: Who's the original patch author? The common
expectation is that the first S-o-b: matches From:.

> ---
> Is something like this feasible for inclusion upstream? I'm asking
> before I look into what it would take to support Intel.

Well, I wouldn't outright say "no".
It needs to be pretty clear that this doesn't put at risk the "normal"
cases. Which is going to be somewhat difficult considering how
convoluted the involved code (sadly) is. See also the commentary
related remark at the very bottom.

> e.g. Replace amd_iommu_perdev_intremap with something generic.
>
> The ath11k device supports and tries to enable 32 MSIs. Linux in PVH
> dom0 and HVM domU fails enabling 32 and falls back to just 1, so that
> is all that has been tested.
>
> Using msi_desc->gvec should be okay, since with posted interrupts the
> gvec is expected to match.
>
> hvm_pi_update_irte() changes the IRTE but not the MSI data in the PCI
> capability, so that isn't suitable by itself.

These last two paragraphs look to again be related to the VT-d aspect.
Yet there's the middle one which apparently doesn't, hence I'm
uncertain I read all of this as it's intended.

> --- a/xen/drivers/passthrough/amd/iommu_intr.c
> +++ b/xen/drivers/passthrough/amd/iommu_intr.c
> @@ -543,6 +543,31 @@ int cf_check amd_iommu_msi_msg_update_ire(
>      if ( !msg )
>          return 0;
>
> +    if ( pdev->gvec_as_irte_idx && amd_iommu_perdev_intremap )
> +    {
> +        int new_remap_index = 0;
> +        if ( msi_desc->gvec )
> +        {
> +            printk("%pp: gvec remap_index %#x -> %#x\n", &pdev->sbdf,
> +                   msi_desc->remap_index, msi_desc->gvec);
> +            new_remap_index = msi_desc->gvec;
> +        }
> +
> +        if ( new_remap_index && new_remap_index != msi_desc->remap_index &&
> +             msi_desc->remap_index != -1 )
> +        {
> +            /* Clear any existing entries */
> +            update_intremap_entry_from_msi_msg(iommu, bdf, nr,
> +                                               &msi_desc->remap_index,
> +                                               NULL, NULL);
> +
> +            for ( i = 0; i < nr; ++i )
> +                msi_desc[i].remap_index = -1;
> +
> +            msi_desc->remap_index = new_remap_index;

You zap nr entries, and then set only 1? Doesn't the zapping loop need
to instead be a setting one? Perhaps with a check up front that the
last value used will still fit in 8 bits? Or else make applying the
quirk conditional upon nr == 1?
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -306,6 +306,17 @@ static void apply_quirks(struct pci_dev *pdev)
>          { PCI_VENDOR_ID_INTEL, 0x6fa0 },
>          { PCI_VENDOR_ID_INTEL, 0x6fc0 },
>      };
> +    static const struct {
> +        uint16_t vendor, device;
> +    } hide_irt[] = {
> +#define PCI_VENDOR_ID_QCOM 0x17cb

At least this wants to move into xen/pci_ids.h.

> +#define QCA6390_DEVICE_ID 0x1101
> +#define QCN9074_DEVICE_ID 0x1104
> +#define WCN6855_DEVICE_ID 0x1103
> +        { PCI_VENDOR_ID_QCOM, QCA6390_DEVICE_ID },
> +        { PCI_VENDOR_ID_QCOM, QCN9074_DEVICE_ID },
> +        { PCI_VENDOR_ID_QCOM, WCN6855_DEVICE_ID },
> +    };

May I ask what's the source of information on which specific devices
are affected by this anomalous behavior? Just the Linux driver?

I'm also uncertain #define-s are very useful in such a case. Raw hex
numbers in the table with a comment indicating the device name ought to
be as fine.

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -127,6 +127,8 @@ struct pci_dev {
>      /* Device with errata, ignore the BARs. */
>      bool ignore_bars;
>
> +    bool gvec_as_irte_idx;
> +
>      /* Device misbehaving, prevent assigning it to guests. */
>      bool broken;
>

Overall more commentary would be needed throughout the patch. This
field is just one example where some minimal explanation is missing.

Jan