On 27/04/2021 18:31, Andrew Fish via groups.io wrote:
> One trick people have pulled in the past is to write a driver that
> produces a “fake” PCI IO Protocol. The “fake” PCI IO driver abstracts
> how the MMIO device shows up on the platform. This works well if the
> MMIO device is really the same IP block as a PCI device. This usually
> maps to the PCI BAR being the same thing as the magic MMIO range. The
> “fake” PCI IO Protocol also abstracts platform-specific DMA rules from
> the generic driver.
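(For anyone following along, the trick described above amounts to something like the sketch below. This is my own illustration, not code from any real driver: FAKE_DEVICE_MMIO_BASE is a made-up platform address that a real driver would discover from ACPI or similar, and only the Mem.Read member is shown; the DMA-related members such as Map/Unmap would hide the platform's DMA rules the same way.)

#include <Uefi.h>
#include <Protocol/PciIo.h>
#include <Library/IoLib.h>

//
// Hypothetical fixed MMIO base for the non-PCI instance of the IP block.
// A real driver would discover this from the platform.
//
#define FAKE_DEVICE_MMIO_BASE  0x40000000ULL

//
// Mem.Read member of the "fake" EFI_PCI_IO_PROTOCOL instance. "BAR 0"
// is simply the device's fixed MMIO window.
//
STATIC
EFI_STATUS
EFIAPI
FakePciIoMemRead (
  IN     EFI_PCI_IO_PROTOCOL        *This,
  IN     EFI_PCI_IO_PROTOCOL_WIDTH  Width,
  IN     UINT8                      BarIndex,
  IN     UINT64                     Offset,
  IN     UINTN                      Count,
  IN OUT VOID                       *Buffer
  )
{
  UINT32  *Dst;
  UINTN   Index;

  //
  // Sketch: support only 32-bit reads from "BAR 0".
  //
  if ((BarIndex != 0) || (Width != EfiPciIoWidthUint32)) {
    return EFI_UNSUPPORTED;
  }

  Dst = Buffer;
  for (Index = 0; Index < Count; Index++) {
    Dst[Index] = MmioRead32 ((UINTN)(FAKE_DEVICE_MMIO_BASE + Offset) + (Index * 4));
  }

  return EFI_SUCCESS;
}

The generic driver then binds to this protocol instance exactly as it would to a real PCI device.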
Slightly off-topic, but I've always been curious about this: given that the entire purpose of PCI BARs is to allow straightforward MMIO, in which standard CPU read/write instructions access device registers with zero overhead and no possible error conditions, why do the EFI_PCI_IO_PROTOCOL.Mem.Read (and related) abstractions exist? They seem to add a lot of complexity for negative benefit, and I'd be interested to know why that design was chosen.
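To make the comparison concrete, here is a sketch of the two paths (again my own illustration; BarBase and the 0x10 register offset are made up, with BarBase assumed to have been extracted once from the descriptors returned by PciIo->GetBarAttributes()):

#include <Uefi.h>
#include <Protocol/PciIo.h>

//
// Hypothetical fragment: two ways of reading the 32-bit register at
// offset 0x10 in BAR 0.
//
STATIC
EFI_STATUS
ReadReg (
  IN  EFI_PCI_IO_PROTOCOL   *PciIo,
  IN  EFI_PHYSICAL_ADDRESS  BarBase,
  OUT UINT32                *Value
  )
{
  EFI_STATUS  Status;

  //
  // Through the abstraction: an indirect call that can fail, so the
  // status has to be checked.
  //
  Status = PciIo->Mem.Read (
                        PciIo,
                        EfiPciIoWidthUint32,
                        0,      // BarIndex
                        0x10,   // Offset
                        1,      // Count
                        Value
                        );
  if (EFI_ERROR (Status)) {
    return Status;
  }

  //
  // Versus what a memory-mapped BAR permits directly: a plain
  // volatile load, with no call and no error path.
  //
  *Value = *(volatile UINT32 *)(UINTN)(BarBase + 0x10);

  return EFI_SUCCESS;
}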
Thanks,

Michael