On 28.05.2024 at 16:31, Ard Biesheuvel wrote:
I would expect each host bridge to have its own separate resource
windows for config space, buses and MMIO regions.
So each host bridge gets a different segment number, and each segment
is associated with a different ECAM region. That also means the bus
range can start at 0x0 for each segment, as they are completely
disjoint.
This is a more accurate representation of the physical topology, given
that each host bridge has its own link to the CPU-side interconnect,
so things like peer-to-peer DMA between endpoints do not generally
work unless the endpoints share a segment, especially in the presence
of SMMUs.
OK. I have to admit that I never checked how a physical NUMA system
handles PCI Express. The code in these patches was written by comparing
with other QEMU targets.
To implement PCIe the way you describe, we probably need to go to the
QEMU devel mailing list and discuss how it can be done there. Or perhaps
I have not gotten deep enough into the PCIe world to see how to make it
happen with the current implementation.
-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#119442): https://edk2.groups.io/g/devel/message/119442
-=-=-=-=-=-=-=-=-=-=-=-