On Sun, Oct 30, 2011 at 14:19, Avi Kivity <a...@redhat.com> wrote:
> On 10/30/2011 04:12 PM, Anthony Liguori wrote:
>> On 10/30/2011 09:02 AM, Avi Kivity wrote:
>>> This somewhat controversial patchset converts internal arithmetic in the
>>> memory API to 128 bits.
>>>
>>> It has been argued that with careful coding we can make 64-bit work as
>>> well. I don't think this is true in general - a memory router can adjust
>>> addresses either forwards or backwards, and some buses (PCIe) need the
>>> full 64-bit space - though it's probably the case for all the
>>> configurations we support today. Regardless, the need for careful coding
>>> means subtle bugs, which I don't want in a core API that is driven by
>>> guest supplied values.
>>
>> The primary need for signed arithmetic is aliases, correct?
>
>> Where do we actually make use of this in practice? I think having
>> negative address spaces is a weird aspect of the memory api and wonder
>> if refactoring it away is a better solution to the problem.
>>
>
> There is no direct use of signed arithmetic in the API (just in the
> implementation). Aliases can cause a region to move in either the
> positive or negative direction, and this requires either signed
> arithmetic or special casing the two directions.
>
> Signed arithmetic is not the only motivation - overflow is another.
> Nothing prevents a user from placing a 64-bit 4k BAR at address
> ffff_ffff_ffff_f000; we could move to base/limit representation, but
> that will likely cause its own bugs. Finally, we should be able to
> represent both a 0-sized region and a 2^64 sized region.
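Just to make the overflow case concrete: with plain 64-bit start + size
arithmetic, the 4k BAR at ffff_ffff_ffff_f000 wraps around. A standalone
sketch (not actual memory API code):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t start = 0xfffffffffffff000ULL;  /* 64-bit BAR near the top */
    uint64_t size  = 0x1000;                 /* 4k */
    uint64_t end   = start + size;           /* should be 2^64, wraps to 0 */

    printf("end = 0x%016" PRIx64 "\n", end); /* prints 0x0000000000000000 */
    return 0;
}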
It looks like 64-bit saturating arithmetic could also work. It should also be possible to work only with (start, end) address pairs and never with start + size; then the 2^64 case shouldn't be an issue.
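Roughly what I mean, as an untested sketch (the names are made up, not the
real memory API):

#include <stdint.h>
#include <stdbool.h>

/* 64-bit add that clamps at UINT64_MAX instead of wrapping */
static inline uint64_t sat_add64(uint64_t a, uint64_t b)
{
    uint64_t r = a + b;
    return r < a ? UINT64_MAX : r;   /* wrapped around, so saturate */
}

/*
 * Describe a region by its first and last byte (inclusive) instead of
 * start + size; the whole 64-bit space is then just
 * { .start = 0, .last = UINT64_MAX } and no 65-bit size is ever needed.
 */
typedef struct Range64 {
    uint64_t start;
    uint64_t last;
} Range64;

/* Overlap test works on (start, last) pairs without computing sizes */
static inline bool range64_intersects(Range64 a, Range64 b)
{
    return a.start <= b.last && b.start <= a.last;
}

For the 4k BAR above, { 0xfffffffffffff000ULL,
sat_add64(0xfffffffffffff000ULL, 0xfff) } gives last == UINT64_MAX with no
overflow. The saturation only kicks in when a guest-supplied size would run
past the end of the address space; everything else stays plain uint64_t.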