On 4 October 2012 07:38, Avi Kivity <a...@redhat.com> wrote:
> On 10/03/2012 10:24 PM, Blue Swirl wrote:
>> >
>> >  #else
>> > -void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>> > -                            int len, int is_write)
>> > +
>> > +void address_space_rw(AddressSpace *as, target_phys_addr_t addr, uint8_t *buf,
>> > +                      int len, bool is_write)
>>
>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>> for the address. It may actually be buggy for 32 bit
>> target_phys_addr_t and 64 bit DMA addresses, if such architectures
>> exist. Maybe memory.c could be made target independent one day.
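
(For concreteness, and purely as a sketch of the suggestion above rather
than of anything in the patch itself, that would make the prototype
something like

    void address_space_rw(AddressSpace *as, uint64_t addr, uint8_t *buf,
                          int len, bool is_write);

i.e. the address parameter would no longer vary with the target.)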
I thought target_phys_addr_t was supposed to be "widest necessary
address", so an architecture with 64 bit DMA addresses should be
implemented with 64 bit target_phys_addr_t...

> We can make target_phys_addr_t 64 bit unconditionally. The fraction of
> deployments where both host and guest are 32 bits is dropping, and I
> doubt the performance drop is noticeable.

I benchmarked it on ARM when I switched ARM to always 64 bit physaddrs;
it was just about measurable (maybe half a percent or so) but not
significant (we don't use this type in the fast path for memory
accesses, only in the slow/IO path). I'd favour just moving everything
to 64 bits (the remaining 32 bit targets are: cris, lm32, m68k,
microblaze, or32, sh4, unicore32, xtensa). However, it does require some
review of devices to check that they're not using target_phys_addr_t
when they meant uint32_t [eg in registers for DMA devices] or relying
on 32 bit wraparound.

-- PMM
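
(As a minimal, made-up sketch of the class of device bug described above
-- the device, struct and function names here are hypothetical, not from
any real device model:

    #include <stdint.h>

    /* Stand-in for the QEMU type under discussion, assumed here to have
     * already been switched to 64 bits unconditionally. */
    typedef uint64_t target_phys_addr_t;

    /* Hypothetical DMA engine whose address register is architecturally
     * 32 bits wide. */
    typedef struct {
        target_phys_addr_t dma_addr;   /* should really be uint32_t */
    } MyDMAState;

    static void mydma_advance(MyDMAState *s, uint32_t len)
    {
        /* With a 32 bit target_phys_addr_t this wrapped to 0 at 4GB,
         * matching the hardware; with a 64 bit type it silently
         * carries into bit 32 instead. */
        s->dma_addr += len;
    }

The fix in such cases is to declare the register field as uint32_t, or
to mask the result explicitly, so the device's behaviour no longer
depends on the width of target_phys_addr_t.)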