* Gerd Hoffmann (kra...@redhat.com) wrote:
> Hi,
>
> > (a) We could rely on the guest physbits to calculate the PCI64 aperture.
>
> I'd love to do that. Move the 64-bit I/O window as high as possible and
> use -- say -- 25% of the physical address space for it.
>
> Problem is we can't.
>
> > failure. Also, if the users are not setting the physbits in the guest,
> > there must be a default (seems to be 40 bits according to my experiments),
> > so it seems to be a good idea to rely on that.
>
> Yes, 40 is the default, and it is used *even if the host supports less
> than that*. Typical values I've seen for Intel hardware are 36 and 39.
> 39 is used even by recent hardware (not the Xeons, but check out a
> laptop or a NUC).
>
> > If guest physbits is 40, why have OVMF limit it to 36, right?
>
> Things will explode in case OVMF uses more physbits than the host
> supports (the host physbits limit applies to EPT too). In other words:
> OVMF can't trust the guest physbits, so it is conservative to be on the
> safe side.
>
> If we can somehow make a *trustable* physbits value available to the
> guest, then yes, we can go that route. But the guest physbits we have
> today unfortunately don't cut it.
In downstream RH qemu, we run with host-physbits as default; so it's
reasonably trustworthy; of course that doesn't help you across a
migration between hosts with different sizes (e.g. an E5 Xeon to an E3).
Changing upstream to do the same would seem sensible to me, but it's not
a foolproof config.

Dave

> take care,
>   Gerd
>

-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
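For reference, a hedged sketch of what the downstream default amounts to on the command line, assuming the `host-phys-bits` CPU property is available in the QEMU build in use (the guest-visible physbits then follows the host's):

```shell
# Make QEMU advertise the host's physical address width to the guest,
# so the guest physbits value is actually trustworthy -- modulo
# migration between hosts with different widths:
qemu-system-x86_64 -cpu host,host-phys-bits=on ...
```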