On Sat, 20 Mar 2021 at 18:59, Michael S. Tsirkin <m...@redhat.com> wrote:
>
> On Fri, Mar 19, 2021 at 12:35:31PM +0000, Peter Maydell wrote:
> > I'm looking at a bug reported against the QEMU arm virt board's pci-gpex
> > PCI controller: https://bugs.launchpad.net/qemu/+bug/1918917
> > where an attempt to write to an address within the PCI IO window
> > where the guest hasn't mapped a BAR causes a CPU exception rather than
> > (what I believe is) the PCI-required behaviour of writes-ignored, reads
> > return -1.
> >
> > What in the QEMU PCI code is responsible for giving the PCI-spec
> > behaviour for accesses to the PCI IO and memory windows where there
> > is no BAR? I was expecting the generic PCI code to map a background
> > memory region over the whole window to do this, but it looks like it
> > doesn't...
> As far as I know, at the PCI level what happens is Master Abort
> on PCI/PCI-X and Unsupported Request on Express.
> PCI spec says:
> The host bus bridge, in PC compatible systems, must return all 1's on
> a read transaction and
> discard data on a write transaction when terminated with Master-Abort.
>
> We thus implement this per host e.g. on pc compatible systems by
> calling pc_pci_as_mapping_init.

Isn't pc_pci_as_mapping_init() "put the PCI space into the system
address space", rather than "define the default behaviour for accesses
in PCI space" ? IIRC x86 has -1/discard for everywhere, though, so
maybe you get that without having to do anything special.

Q: if PCI device A does a bus-mastering DMA read to a PCI address
where no other device has been mapped, does the spec require it to
(a) get back a "transaction failed" response or (b) get back read-data
of -1 ? It sounds like the answer based on what you write above is
(a), device A gets a Master Abort.

(Put another way, is the -1/discard behaviour general to PCI
transactions, or is it strictly something that happens at the host
bridge where the host bridge turns host CPU transactions into PCI
transactions ?)

If this is host-bridge specific then I guess our current
implementation of "leave it up to the host bridge code" makes sense,
but it also seems like a recipe for all our host bridges forgetting
this corner case, in the absence of support from the common code for
making it easy/the default...

Anyway, I think that for hw/pci-host/gpex.c we would need to change
the current

    memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
    [...]
    sysbus_init_mmio(sbd, &s->io_mmio);
    [...]
    pci->bus = pci_register_root_bus(dev, "pcie.0", gpex_set_irq,
                                     pci_swizzle_map_irq_fn, s, &s->io_mmio,
                                     &s->io_ioport, 0, 4, TYPE_PCIE_BUS);

to also create a container MR with a background set of io read/write
functions to give the -1/discard behaviour, map s->io_mmio into that
container, and return the container as the sysbus MMIO region. (And
the same again for the IO window.)

thanks
-- PMM