On Fri, 1 Mar 2019 11:08:17 +0100 Auger Eric <eric.au...@redhat.com> wrote:
> Hi Igor,
> 
> On 2/28/19 5:29 PM, Igor Mammedov wrote:
> > On Thu, 28 Feb 2019 16:03:24 +0100
> > Eric Auger <eric.au...@redhat.com> wrote:
> > 
> >> Now we have the extended memory map (high IO regions beyond the
> >> scalable RAM) and dynamic IPA range support at KVM/ARM level
> >> we can bump the legacy 255GB initial RAM limit. The actual maximum
> >> RAM size now depends on the physical CPU and host kernel, in
> >> accelerated mode. In TCG mode, it depends on the VCPU
> >> AA64MMFR0.PARANGE.
> >> 
> >> Signed-off-by: Eric Auger <eric.au...@redhat.com>
> >> 
> >> ---
> >> v7 -> v8:
> >> - TCG PAMAX check moved in a separate patch
> >> 
> >> v6 -> v7
> >> - handle TCG case
> >> - set_memmap modifications moved to previous patches
> >> ---
> >>  hw/arm/virt.c | 21 +--------------------
> >>  1 file changed, 1 insertion(+), 20 deletions(-)
> >> 
> >> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> >> index a3da75a5ae..a45f0fcf79 100644
> >> --- a/hw/arm/virt.c
> >> +++ b/hw/arm/virt.c
> >> @@ -95,21 +95,8 @@
> >>  
> >>  #define PLATFORM_BUS_NUM_IRQS 64
> >>  
> >> -/* RAM limit in GB. Since VIRT_MEM starts at the 1GB mark, this means
> >> - * RAM can go up to the 256GB mark, leaving 256GB of the physical
> >> - * address space unallocated and free for future use between 256G and 512G.
> >> - * If we need to provide more RAM to VMs in the future then we need to:
> >> - *  * allocate a second bank of RAM starting at 2TB and working up
> >> - *  * fix the DT and ACPI table generation code in QEMU to correctly
> >> - *    report two split lumps of RAM to the guest
> >> - *  * fix KVM in the host kernel to allow guests with >40 bit address spaces
> >> - * (We don't want to fill all the way up to 512GB with RAM because
> >> - * we might want it for non-RAM purposes later. Conversely it seems
> >> - * reasonable to assume that anybody configuring a VM with a quarter
> >> - * of a terabyte of RAM will be doing it on a host with more than a
> >> - * terabyte of physical address space.)
> >> - */
> >>  #define RAMBASE GiB
> >> +/* Legacy RAM limit in GB (< version 4.0) */
> >>  #define LEGACY_RAMLIMIT_GB 255
> >>  #define LEGACY_RAMLIMIT_BYTES (LEGACY_RAMLIMIT_GB * GiB)
> > do we need to keep these couple around?
> > 
> > it's used only in
> >     [VIRT_MEM] = { RAMBASE, LEGACY_RAMLIMIT_BYTES },
> > and doesn't have any effect whatsoever.
> > I'd set initial VIRT_MEM.size to 0 and drop LEGACY_RAMLIMIT_*
> > maybe add comment above entry that size is defined by ram_size
> 
> in virt_set_memmap I was checking if (high_io_base < 256 GiB) then
> high_io_base = 256GiB. Maybe this 256GiB value comes out of the blue and
> I should also replace it with vms->memmap[VIRT_MEM].base +
> LEGACY_RAMLIMIT_BYTES.
I'd go for it + comment on top of it.
> We maintain some kind of compatibility with the old memmap so I prefer
> to keep this info somewhere.
> 
> I added a comment though:
> /* Actual RAM size depends on initial RAM and device memory options */
> 
> Thanks
> 
> Eric
> 
> > 
> >> 
> >> @@ -1515,12 +1502,6 @@ static void machvirt_init(MachineState *machine)
> >>  
> >>      vms->smp_cpus = smp_cpus;
> >>  
> >> -    if (machine->ram_size > vms->memmap[VIRT_MEM].size) {
> >> -        error_report("mach-virt: cannot model more than %dGB RAM",
> >> -                     LEGACY_RAMLIMIT_GB);
> >> -        exit(1);
> >> -    }
> >> -
> >>      if (vms->virt && kvm_enabled()) {
> >>          error_report("mach-virt: KVM does not support providing "
> >>                      "Virtualization extensions to the guest CPU");
> > 