On Sun, Feb 28, 2010 at 10:39:21PM +0000, Paul Brook wrote:
> > I'm sympathetic to your arguments though. As qemu is today, the above
> > is definitely the right thing to do. But ram is always ram and ram
> > always has a fixed (albeit non-linear) mapping within a guest.
>
> I think this assumption is unsafe. There are machines where RAM mappings
> can change. It's not uncommon for a chip select (i.e. a physical memory
> address region) to be switchable between several different sources, one
> of which may be RAM. I'm pretty sure this functionality is present (but
> not actually implemented) on some of the current qemu targets.
>
> I agree that changing RAM mappings under an active DMA is a fairly
> suspect thing to do. However I think we need to avoid caching mappings
> across separate DMA transactions: between transactions the guest can
> know that no DMA will occur, and can safely remap things.
>
> I'm also of the opinion that virtio devices should behave the same as
> any other device, i.e. if you put a virtio-net-pci device on a PCI bus
> behind an IOMMU, then it should see the same address space as any other
> PCI device in that location.

It already doesn't: virtio passes physical memory addresses to the device
instead of DMA addresses.
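Concretely, the difference would look something like the sketch below. It
leans on qemu-internal types; dma_addr_t and iommu_translate() are made up
for illustration, since qemu has no IOMMU layer today:

    /* Sketch only: what virtio does today vs. what an IOMMU-aware
     * device would have to do. */

    typedef uint64_t dma_addr_t;    /* hypothetical bus address type */

    /* Today: virtio treats the address found in the vring as a guest
     * physical address and dereferences it directly. */
    static void virtio_read_buf(target_phys_addr_t gpa,
                                uint8_t *buf, int len)
    {
        cpu_physical_memory_read(gpa, buf, len);
    }

    /* Behind an IOMMU: the address in a descriptor is a bus address
     * and would have to be translated before touching guest RAM.
     * iommu_translate() is hypothetical. */
    static void pci_read_buf(PCIDevice *dev, dma_addr_t bus_addr,
                             uint8_t *buf, int len)
    {
        target_phys_addr_t gpa = iommu_translate(dev, bus_addr);
        cpu_physical_memory_read(gpa, buf, len);
    }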
> Apart from anything else, failure to do this breaks nested
> virtualization.

Assigning a PV device in nested virtualization? It could work, but I'm
not sure what the point would be.

> While qemu doesn't currently implement an IOMMU, the DMA
> interfaces have been designed to allow it.
>
> > void cpu_ram_add(target_phys_addr_t start, ram_addr_t size);
>
> We need to support aliased memory regions. For example, the ARM RealView
> boards expose the first 256M of RAM at both address 0x0 and 0x70000000.
> It's also common for systems to create aliases by ignoring certain
> address bits, e.g. each SIMM slot is allocated a fixed 256M region.
> Populating that slot with a 128M stick will cause the contents to be
> aliased in both the top and bottom halves of that region.
>
> Paul
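For what it's worth, the "ignoring address bits" aliasing is easy to
model: with a 128M stick in a 256M window, the decoder simply never looks
at bit 27 of the offset, which is the same as taking the offset modulo
the populated size. A standalone sketch, using the sizes from Paul's
example:

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW_SIZE (256u << 20)  /* 256M region per slot */
    #define STICK_SIZE  (128u << 20)  /* 128M actually populated */

    /* Decode an offset within the slot window to an offset within
     * the stick. Bit 27 is simply not decoded, so both halves of
     * the window reach the same cells. */
    static uint32_t decode(uint32_t offset_in_window)
    {
        return offset_in_window & (STICK_SIZE - 1);
    }

    int main(void)
    {
        /* Offsets 0x0000000 and 0x8000000 alias to the same cell. */
        printf("%08x -> %08x\n", 0x0000000u, decode(0x0000000u));
        printf("%08x -> %08x\n", 0x8000000u, decode(0x8000000u));
        return 0;
    }

Any RAM API that caches mappings would need to cope with two guest
physical addresses resolving to the same backing storage here.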