Thanks Jonathan for the review.

> As per reply to the cover letter I definitely want to see SRAT table
> dumps in here though so we can easily see what this is actually
> building.

Ack.
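(As a rough sketch of how such dumps can be produced from a booted
guest, assuming the standard acpica-tools utilities are installed;
exact flags and output filenames may vary with the acpica version, and
nothing here is specific to this series:)

  acpidump -o tables.dat          # dump all ACPI tables in the guest
  acpixtract -s SRAT tables.dat   # extract the SRAT (writes srat.dat)
  iasl -d srat.dat                # disassemble for review (srat.dsl)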
> I worry that some OS might make the assumption that it's one GI node
> per PCI device though. The language in the ACPI specification is:
>
> "The Generic Initiator Affinity Structure provides the association
> between _a_ generic initiator and _the_ proximity domain to which the
> initiator belongs".
>
> The use of _a_ and _the_ in there makes it pretty explicitly an N:1
> relationship (multiple devices can be in the same proximity domain,
> but a device may only be in one).
> To avoid that confusion you will need an ACPI spec change. I'd be
> happy to support

Yeah, that's a good point. It won't hurt to make a spec change to allow
a device to be associated with multiple proximity domains.

> The reason you can get away with this in Linux today is that I only
> implemented a very minimal support for GIs, with the mappings being
> provided the other way around (_PXM in a PCIe node in DSDT). If we
> finish that support off I'd assume

I'm not sure I understand this. Can you provide a reference to this
DSDT-related change?

> Also, this effectively creates a bunch of separate generic initiator
> nodes, and lumping that under one object seems to imply they are in
> general connected to each other.
>
> I'd be happier with a separate instance per GI node
>
> -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
> -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
>
> etc with the proviso that anyone using this on a system that assumes a
> one to one mapping for PCI
>
> However, I'll leave it up to those more familiar with the QEMU numa
> control interface design to comment on whether this approach is
> preferable to making the gi part of the numa node entry or doing it
> like hmat.
> -numa srat-gi,node-id=10,gi-pci-dev=dev1

The current acpi-generic-initiator object usage came out of the
discussion on v1, to essentially link all of the device's NUMA nodes
to the device.
(https://lore.kernel.org/all/20230926131427.1e441670.alex.william...@redhat.com/)

Can Alex or David comment on which is preferable (the current
mechanism vs. the 1:1 mapping per object suggested by Jonathan)? A
sketch of what the 1:1 form would look like in a full command line is
below.
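(For concreteness, a rough sketch of Jonathan's suggested 1:1 shape in
a full invocation. Only the -object and -numa srat-gi spellings are
taken from the thread above; the machine arguments, the vfio-pci host
address, the node ids, and dev1 are placeholders:)

  qemu-system-aarch64 ... \
    -device vfio-pci,host=0000:01:00.0,id=dev1 \
    -numa node,nodeid=10 -numa node,nodeid=11 \
    -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10 \
    -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11

  # vs. the hmat-like alternative Jonathan mentions (hypothetical
  # syntax, not implemented):
  #   -numa srat-gi,node-id=10,gi-pci-dev=dev1

As I read the proposal, each such object would then emit one Generic
Initiator Affinity Structure in the SRAT, tying dev1's PCI device
handle to the given proximity domain.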