On 29 January 2015 at 14:37, Alexander Graf <ag...@suse.de> wrote:
> On 29.01.15 15:34, Peter Maydell wrote:
>> I kind of see, but isn't this just a window from CPU address
>> space into PCI address space, not vice-versa?
>
> Yup, exactly. But PCI devices need to map themselves somewhere into the
> PCI address space. So if I configure a BAR to live at 0x10000000, it
> should also show up at 0x10000000 when accessed from the CPU. That's
> what the mapping above is about.

No, it doesn't have to. It's a choice to make the mapping be such that
the system address for a BAR matches the address in PCI memory space,
not a requirement. I agree it's a sensible choice, though. But as I
say, this code is setting up one mapping (the system address -> PCI
space mapping), not two.
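For illustration, here is a minimal sketch of that single mapping in
terms of QEMU's memory API. The window base, size, and names are made
up, and 'sysmem' and 'pci_mmio' are assumed to be the system memory
region and the region backing PCI memory space; this is not the actual
GPEX code:

    /* Expose a window into PCI memory space in the system address
     * space. Illustrative values only. */
    static MemoryRegion window;

    /* A 256MB alias starting at PCI address 0x10000000 ... */
    memory_region_init_alias(&window, NULL, "pci-mmio-window",
                             pci_mmio, 0x10000000, 0x10000000);

    /* ... mapped at the same system address. A BAR programmed to
     * 0x10000000 then appears at CPU address 0x10000000 only because
     * the two offsets happen to match; mapping the alias at any other
     * system address would be equally valid. Either way this is the
     * one system -> PCI mapping, and it says nothing about DMA in
     * the other direction. */
    memory_region_add_subregion(sysmem, 0x10000000, &window);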
>> DMA by PCI devices bus-mastering into system memory must be
>> being set up elsewhere, I think.
>
> Yes, that's a different mechanism that's not implemented yet for GPEX
> :).

We can't not implement DMA; it would break lots of the usual PCI
devices people want to use. In fact I thought the PCI core code
implemented a default of "DMA by PCI devices goes to the system
address space" if you didn't specifically set up something else by
calling pci_setup_iommu(). This is definitely how it works for plain
PCI host bridges; are PCIe bridges different?

> On ARM this would happen via SMMU emulation.

There's no requirement for a PCI host controller to sit behind an
SMMU -- that's a system design choice. We don't need to implement the
SMMU yet (or perhaps ever?); we definitely need to support PCI DMA.
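For reference, the default I have in mind looks roughly like this
(simplified and paraphrased from QEMU's hw/pci/pci.c; bridge recursion
and version-specific field names are omitted):

    /* Sketch of how the PCI core resolves the address space that a
     * device's bus-master DMA targets. */
    AddressSpace *pci_device_iommu_address_space(PCIDevice *dev)
    {
        PCIBus *bus = dev->bus;

        /* If the board or host bridge registered an IOMMU hook via
         * pci_setup_iommu(), DMA is routed through whatever address
         * space that hook returns (e.g. one translated by an SMMU). */
        if (bus->iommu_fn) {
            return bus->iommu_fn(bus, bus->iommu_opaque, dev->devfn);
        }

        /* Otherwise bus-master DMA goes straight to the system
         * address space -- the default a host bridge like GPEX gets
         * without doing anything special. */
        return &address_space_memory;
    }

-- PMM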