On 1/8/2026 10:46 AM, Michael Kelley wrote:
> From: Yu Zhang <[email protected]> Sent: Monday, December 8, 2025 9:11 PM
>>
>> From: Easwar Hariharan <[email protected]>
>>
>> Hyper-V uses a logical device ID to identify a PCI endpoint device for
>> child partitions. This ID will also be required for future hypercalls
>> used by the Hyper-V IOMMU driver.
>>
>> Refactor the logic for building this logical device ID into a standalone
>> helper function and export the interface for wider use.
>>
>> Signed-off-by: Easwar Hariharan <[email protected]>
>> Signed-off-by: Yu Zhang <[email protected]>
>> ---
>>  drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
>>  include/asm-generic/mshyperv.h      |  2 ++
>>  2 files changed, 22 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
>> index 146b43981b27..4b82e06b5d93 100644
>> --- a/drivers/pci/controller/pci-hyperv.c
>> +++ b/drivers/pci/controller/pci-hyperv.c
>> @@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
>>
>>  #define hv_msi_prepare              pci_msi_prepare
>>
>> +/**
>> + * hv_build_logical_dev_id() - Build a "Device Logical ID" from this PCI
>> + * bus's instance GUID and the function number of the device.
>> + */
>> +u64 hv_build_logical_dev_id(struct pci_dev *pdev)
>> +{
>> +    struct pci_bus *pbus = pdev->bus;
>> +    struct hv_pcibus_device *hbus = container_of(pbus->sysdata,
>> +                                            struct hv_pcibus_device, sysdata);
>> +
>> +    return (u64)((hbus->hdev->dev_instance.b[5] << 24) |
>> +                 (hbus->hdev->dev_instance.b[4] << 16) |
>> +                 (hbus->hdev->dev_instance.b[7] << 8)  |
>> +                 (hbus->hdev->dev_instance.b[6] & 0xf8) |
>> +                 PCI_FUNC(pdev->devfn));
>> +}
>> +EXPORT_SYMBOL_GPL(hv_build_logical_dev_id);
> 
> This change is fine for hv_irq_retarget_interrupt(), but it doesn't help for the
> new IOMMU driver because pci-hyperv.c can be (and often is) built as a module.
> The new Hyper-V IOMMU driver in this patch series is built-in, and so it can't
> use this symbol in that case -- you'll get a link error on vmlinux when building
> the kernel. Requiring pci-hyperv.c to *not* be built as a module would also
> require that the VMBus driver not be built as a module, so I don't think that's
> the right solution.
> 
> This is a messy problem. The new IOMMU driver needs to start with a generic
> "struct device" for the PCI device, and somehow find the corresponding VMBus
> PCI pass-thru device from which it can get the VMBus instance ID. I'm thinking
> about ways to do this that don't depend on code and data structures that are
> private to the pci-hyperv.c driver, and will follow up if I have a good
> suggestion.

Thank you, Michael. FWIW, I did try to pull the device ID components out of
pci-hyperv into include/linux/hyperv.h and/or a new include/linux/pci-hyperv.h,
but it was just too messy, as you say.

> I was wondering if this "logical device id" is actually parsed by the hypervisor,
> or whether it is just a unique ID that is opaque to the hypervisor. From the
> usage in the hypercalls in pci-hyperv.c and this new IOMMU driver, it appears
> to be the former. Evidently the hypervisor is taking this logical device ID
> and matching against bytes 4 thru 7 of the instance GUIDs of PCI pass-thru
> devices offered to the guest, so as to identify a particular PCI pass-thru device.
> If that's the case, then Linux doesn't have the option of choosing some other
> unique ID that is easier to generate and access.

Yes, the device ID is actually used by the hypervisor to find the corresponding
PCI pass-thru device and the physical IOMMUs the device is behind, and to
execute the requested operation on those IOMMUs.

> There's a uniqueness issue with this kind of logical device ID that has been
> around for years, but I had never thought about before. In hv_pci_probe()
> instance GUID bytes 4 and 5 are used to generate the PCI domain number for
> the "fake" PCI bus that the PCI pass-thru device resides on. The issue is the
> lack of guaranteed uniqueness of bytes 4 and 5, so there's code to deal with
> a collision. (The full GUID is unique, but not necessarily some subset of the
> GUID.) It seems like the same kind of uniqueness issue could occur here. Does
> the Hyper-V host provide any guarantees about the uniqueness of bytes 4 thru
> 7 as a unit, and if not, what happens if there is a collision? Again, this
> uniqueness issue has existed for years, so it's not new to this patch set, but
> with new uses of the logical device ID, it seems relevant to consider.
 
Thank you for bringing that up. I was aware of the uniqueness workaround but,
like you, I had not considered that the workaround could prevent matching the
device ID with the record the hypervisor has of the PCI pass-thru device
assigned to us. I will work with the hypervisor folks to resolve this before
this patch series is posted for merge.

Thanks,
Easwar (he/him)
