Hi,

> It seems that Page1GSupport is already TRUE in my case, so
> unfortunately, the suggested changes don't help.
> 
> Before commit bbda386d25, PhysMemAddressWidth is 36, after the commit,
> it's 47. I tried with hardcoding different values:
> 45 - My VM boots fine.
> 46 - I run into a "KVM internal error. Suberror: 1" during Linux boot
> (that's also what happens with 47 and 750 MiB of memory).
> 47 - Hangs right away and display is never initialized.

Hmm.  "KVM internal error" sounds like this could be a Linux kernel bug.

I can't reproduce this, although I'm not testing with KVM but with TCG
because I don't have a machine with 48 phys-bits at hand.

Red Hat QE didn't run into any problems either, although it could
certainly be that they didn't test guests with only 512 MB.

> Is there any interest to use a smaller limit than 47 from upstream's
> perspective? Admittedly, it is a rather niche case to use OVMF with so
> little memory.

Well, the problem OVMF has (compared to physical platforms) is that it
needs to scale from tiny (512 MB) to huge (multi-TB).  There are some
heuristics in place to deal with that like the one limiting the address
space used in case there is no support for gigabyte pages.
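
The heuristic is roughly along these lines (a paraphrased sketch from
memory, not the actual OvmfPkg/Library/PlatformInitLib code; the
function name is made up, the types are the usual edk2 ones):

#include <Base.h>   // edk2 UINT8/BOOLEAN typedefs

//
// Cap the advertised physical address width when gigabyte pages are
// not available, so tiny guests don't drown in page table overhead.
// PhysBits comes from CPUID 0x80000008 EAX[7:0], Page1GSupport from
// CPUID 0x80000001 EDX bit 26.
//
UINT8
CapPhysMemAddressWidth (
  IN UINT8    PhysBits,
  IN BOOLEAN  Page1GSupport
  )
{
  if (!Page1GSupport && (PhysBits > 40)) {
    PhysBits = 40;
  }

  return PhysBits;
}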

So, just lowering the limit doesn't look like a good plan.  We can try
to tweak the heuristics so OVMF picks better defaults for both huge and
tiny guests.  For that it would be good to figure out what the actual
root cause is.  Is it really memory pressure?  With gigabyte pages
available it should not need that much memory for page tables ...
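
Back-of-the-envelope (my numbers, just to illustrate), assuming a full
identity map of a 47-bit (128 TiB) address space: with 1G pages you
need one PML4 plus 256 page directory pointer tables, i.e. 257 x 4 KiB,
about 1 MiB.  With only 2M pages you additionally need one 4 KiB page
directory per GiB mapped, i.e. 131072 x 4 KiB = 512 MiB of page tables
alone; more than the whole guest has in this case, which is exactly why
the no-gigabyte-pages heuristic exists.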

In any case, you can run qemu with "-cpu $name,phys-bits=<nr>" to make
the guest address space smaller.
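
For example (illustrative values), phys-bits=40 caps the guest at 1 TiB
of physical address space:

  qemu-system-x86_64 -cpu qemu64,phys-bits=40 -m 512 [...]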

take care,
  Gerd


