On 23/02/2017 16:39, Alex Williamson wrote:
> On Thu, 23 Feb 2017 16:21:47 +0100
> Paolo Bonzini <pbonz...@redhat.com> wrote:
>
>> On 23/02/2017 15:35, Peter Maydell wrote:
>>> On 23 February 2017 at 12:53, Paolo Bonzini <pbonz...@redhat.com> wrote:
>>>>
>>>> On 23/02/2017 13:26, Peter Maydell wrote:
>>>>> On 23 February 2017 at 11:43, Paolo Bonzini <pbonz...@redhat.com> wrote:
>>>>>> On 23/02/2017 12:34, Peter Maydell wrote:
>>>>>>> We should probably update the doc comment to note that the
>>>>>>> pointer is to host-endianness memory (and that this is not
>>>>>>> like normal RAM which is target-endian)...
>>>>>>
>>>>>> I wouldn't call it host-endianness memory, and I disagree that normal
>>>>>> RAM is target-endian---in both cases it's just a bunch of bytes.
>>>>>>
>>>>>> However, the access done by the MemoryRegionOps callbacks needs to match
>>>>>> the endianness declared by the MemoryRegionOps themselves.
>>>>>
>>>>> Well, if the guest stores a bunch of integers to the memory, which
>>>>> way round do you see them when you look at the bunch of bytes?
>>>>
>>>> You see them in whatever endianness the guest used.
>>>
>>> I'm confused. I said "normal RAM and this ramdevice memory are
>>> different", and you seem to be saying they're the same. I don't
>>> think they are (in particular I think with a BE guest on an
>>> LE host they'll look different).
>>
>> No, they look entirely the same. The only difference is that they go
>> through MemoryRegionOps instead of memcpy.
>
> Is this true for vfio use case? If we use memcpy we're talking directly
> to the device with no endian conversions. If we use read/write then
> there is an endian conversion in the host kernel.
But the ram-device MemoryRegionOps do not use file read/write; they use plain
memory reads and writes on the mmap'ed region, so they too talk directly to
the device, with no endian conversion in the host kernel.

Paolo
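
To illustrate, here is a rough sketch of what those ops boil down to, loosely
modeled on memory_region_ram_device_read() in memory.c (simplified; treat the
exact names and error handling as approximate):

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    static uint64_t ram_device_read_sketch(void *opaque, hwaddr addr,
                                           unsigned size)
    {
        /* opaque is the MemoryRegion backed by the mmap'ed device BAR. */
        MemoryRegion *mr = opaque;
        /* Host pointer into the mapping, not a file descriptor. */
        uint8_t *host = memory_region_get_ram_ptr(mr);
        uint64_t data = ~0ULL;

        /*
         * Plain CPU loads on the mapped pointer -- no pread() on the vfio
         * region fd, and no byte swapping here; any swapping is decided by
         * the .endianness declared in the MemoryRegionOps.
         */
        switch (size) {
        case 1:
            data = *(uint8_t *)(host + addr);
            break;
        case 2:
            data = *(uint16_t *)(host + addr);
            break;
        case 4:
            data = *(uint32_t *)(host + addr);
            break;
        case 8:
            data = *(uint64_t *)(host + addr);
            break;
        }
        return data;
    }

The point being: the access is a load/store on the mapped pointer, just done
through the MemoryRegionOps callbacks instead of memcpy.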