On 7/12/22 10:06, Igor Mammedov wrote:
> On Mon, 11 Jul 2022 21:03:28 +0100
> Joao Martins <joao.m.mart...@oracle.com> wrote:
>
>> On 7/11/22 16:31, Joao Martins wrote:
>>> On 7/11/22 15:52, Joao Martins wrote:
>>>> On 7/11/22 13:56, Igor Mammedov wrote:
>>>>> On Fri, 1 Jul 2022 17:10:13 +0100
>>>>> Joao Martins <joao.m.mart...@oracle.com> wrote:
>>>>>
>>>>>> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
>>>>>> index a79fa1b6beeb..07025b510540 100644
>>>>>> --- a/hw/i386/pc.c
>>>>>> +++ b/hw/i386/pc.c
>>>>>> @@ -907,6 +907,87 @@ static uint64_t pc_get_cxl_range_end(PCMachineState *pcms)
>>>>>>      return start;
>>>>>>  }
>>>>>>
>>>>>> +static hwaddr pc_max_used_gpa(PCMachineState *pcms,
>>>>>> +                              hwaddr above_4g_mem_start,
>>>>>> +                              uint64_t pci_hole64_size)
>>>>>> +{
>>>>>> +    X86MachineState *x86ms = X86_MACHINE(pcms);
>>>>>> +
>>>>>
>>>>>> +    if (!x86ms->above_4g_mem_size) {
>>>>>> +        /*
>>>>>> +         * 32-bit pci hole goes from
>>>>>> +         * end-of-low-ram (@below_4g_mem_size) to IOAPIC.
>>>>>> +         */
>>>>>> +        return IO_APIC_DEFAULT_ADDRESS - 1;
>>>>>> +    }
>>>>> this hunk still bothers me (nothing changed wrt v5 issues around it)
>>>>> issues recap: (
>>>>> 1. correctness of it
>>>>> 2. being limited to AMD only, while it seems pretty generic to me
>>>>> 3. should be a separate patch
>>>>> )
>>>>>
>>>> How about I instead delete this hunk, and only call pc_set_amd_above_4g_mem_start()
>>>> when there's @above_4g_mem_size ? Like in pc_memory_init() I would instead:
>>>>
>>>>     if (IS_AMD_CPU(&cpu->env) && x86ms->above_4g_mem_size) {
>>>>         hwaddr start = x86ms->above_4g_mem_start;
>>>>
>>>>         if (pc_max_used_gpa(pcms, start, pci_hole64_size) >= AMD_HT_START) {
>>>>             pc_set_amd_above_4g_mem_start(pcms, pci_hole64_size);
>>>>         }
>>>>         ...
>>>>     }
>>>>
>>>> Given that otherwise it is impossible to ever encounter the 1T boundary.
>>>>
>>>
>>> And while at it I would also remove any unneeded arguments from
>>> pc_max_used_gpa(), which would turn the function into this:
>>>
>>> +static hwaddr pc_max_used_gpa(uint64_t pci_hole64_size)
>>> +{
>>> +    return pc_pci_hole64_start() + pci_hole64_size;
>>> +}
>>>
>>> I would nuke the added helper if it wasn't for having 2 call sites in this patch.
>>>
>>
>> Full patch diff further below -- after removing pc_max_used_gpa() and made
>> further cleanups given this code can be much simpler after using this approach.
>>
>>>> If not ... what other alternative would address your concern?
>>>>
>>
>> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
>> index e178bbc4129c..1ded3faeffe0 100644
>> --- a/hw/i386/pc.c
>> +++ b/hw/i386/pc.c
>> @@ -882,6 +882,62 @@ static uint64_t pc_get_cxl_range_end(PCMachineState *pcms)
>>      return start;
>>  }
>>
>> +/*
>> + * AMD systems with an IOMMU have an additional hole close to the
>> + * 1Tb, which are special GPAs that cannot be DMA mapped. Depending
>> + * on kernel version, VFIO may or may not let you DMA map those ranges.
>> + * Starting Linux v5.4 we validate it, and can't create guests on AMD machines
>> + * with certain memory sizes. It's also wrong to use those IOVA ranges
>> + * in detriment of leading to IOMMU INVALID_DEVICE_REQUEST or worse.
>> + * The ranges reserved for Hyper-Transport are:
>> + *
>> + *  FD_0000_0000h - FF_FFFF_FFFFh
>> + *
>> + * The ranges represent the following:
>> + *
>> + *  Base Address    Top Address     Use
>> + *
>> + *  FD_0000_0000h   FD_F7FF_FFFFh   Reserved interrupt address space
>> + *  FD_F800_0000h   FD_F8FF_FFFFh   Interrupt/EOI IntCtl
>> + *  FD_F900_0000h   FD_F90F_FFFFh   Legacy PIC IACK
>> + *  FD_F910_0000h   FD_F91F_FFFFh   System Management
>> + *  FD_F920_0000h   FD_FAFF_FFFFh   Reserved Page Tables
>> + *  FD_FB00_0000h   FD_FBFF_FFFFh   Address Translation
>> + *  FD_FC00_0000h   FD_FDFF_FFFFh   I/O Space
>> + *  FD_FE00_0000h   FD_FFFF_FFFFh   Configuration
>> + *  FE_0000_0000h   FE_1FFF_FFFFh   Extended Configuration/Device Messages
>> + *  FE_2000_0000h   FF_FFFF_FFFFh   Reserved
>> + *
>> + * See AMD IOMMU spec, section 2.1.2 "IOMMU Logical Topology",
>> + * Table 3: Special Address Controls (GPA) for more information.
>> + */
>> +#define AMD_HT_START         0xfd00000000UL
>> +#define AMD_HT_END           0xffffffffffUL
>> +#define AMD_ABOVE_1TB_START  (AMD_HT_END + 1)
>> +#define AMD_HT_SIZE          (AMD_ABOVE_1TB_START - AMD_HT_START)
>> +
>> +static void pc_set_amd_above_4g_mem_start(PCMachineState *pcms,
>> +                                          hwaddr maxusedaddr)
>> +{
>> +    X86MachineState *x86ms = X86_MACHINE(pcms);
>> +    hwaddr maxphysaddr;
>> +
>> +    /*
>> +     * Relocating ram-above-4G requires more than TCG_PHYS_ADDR_BITS (40).
>> +     * So make sure phys-bits is required to be appropriately sized in order
>> +     * to proceed with the above-4g-region relocation and thus boot.
>> +     */
>> +    maxphysaddr = ((hwaddr)1 << X86_CPU(first_cpu)->phys_bits) - 1;
>> +    if (maxphysaddr < maxusedaddr) {
>> +        error_report("Address space limit 0x%"PRIx64" < 0x%"PRIx64
>> +                     " phys-bits too low (%u) cannot avoid AMD HT range",
>> +                     maxphysaddr, maxusedaddr, X86_CPU(first_cpu)->phys_bits);
>> +        exit(EXIT_FAILURE);
>> +    }
>> +
>> +    x86ms->above_4g_mem_start = AMD_ABOVE_1TB_START;
>> +}
>> +
>>  void pc_memory_init(PCMachineState *pcms,
>>                      MemoryRegion *system_memory,
>>                      MemoryRegion *rom_memory,
>> @@ -897,6 +953,7 @@ void pc_memory_init(PCMachineState *pcms,
>>      PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
>>      X86MachineState *x86ms = X86_MACHINE(pcms);
>>      hwaddr cxl_base, cxl_resv_end = 0;
>> +    X86CPU *cpu = X86_CPU(first_cpu);
>>
>>      assert(machine->ram_size == x86ms->below_4g_mem_size +
>>                                  x86ms->above_4g_mem_size);
>> @@ -904,6 +961,29 @@ void pc_memory_init(PCMachineState *pcms,
>>      linux_boot = (machine->kernel_filename != NULL);
>>
>>      /*
>> +     * The HyperTransport range close to the 1T boundary is unique to AMD
>> +     * hosts with IOMMUs enabled. Restrict the ram-above-4g relocation
>> +     * to above 1T to AMD vCPUs only.
>> +     */
>> +    if (IS_AMD_CPU(&cpu->env) && x86ms->above_4g_mem_size) {
>
> it has the same issue as pc_max_used_gpa(), i.e. x86ms->above_4g_mem_size != 0
> doesn't mean that there isn't any memory above 4Gb nor that there isn't
> any MMIO (sgx/cxl/pci64hole); that was the reason we were considering
> max_used_gpa
>

Argh yes, you are right. I see it now.
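
Something along these lines is what I'd try instead (rough sketch only,
untested): derive the limit from pc_pci_hole64_start(), which IIUC already
sits above RAM, the device-memory (hotplug) region and CXL windows, and
special-case small phys-bits rather than keying off above_4g_mem_size:

    /* Rough sketch, untested -- not the final patch */
    static hwaddr pc_max_used_gpa(uint64_t pci_hole64_size)
    {
        X86CPU *cpu = X86_CPU(first_cpu);

        /* Without a 64-bit PCI hole, the CPU limit is the max usable GPA */
        if (cpu->phys_bits <= 32) {
            return ((hwaddr)1 << cpu->phys_bits) - 1;
        }

        /* pc_pci_hole64_start() already sits above RAM/hotplug/CXL ranges */
        return pc_pci_hole64_start() + pci_hole64_size - 1;
    }

The -1 is only there to make it an inclusive "last used address", so it can be
compared directly against maxphysaddr.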
> I'd prefer to keep the pc_max_used_gpa() idea but make it work for the above
> cases and be more generic (i.e. not tied to AMD only), since
> 'pc_max_used_gpa() < physbits'

Are you also indirectly suggesting here that the check inside
pc_set_amd_above_4g_mem_start() should be moved into pc_memory_init(), given
that it's orthogonal to this issue? ISTR that you suggested this at some point.
If so, then there's probably very little reason to keep
pc_set_amd_above_4g_mem_start() around.

> applies equally to AMD and Intel (and to trip it, one just has to configure
> small enough physbits or large enough hotpluggable RAM/CXL/PCI64HOLE)
>

I can reproduce the issue you're thinking of with basic memory hotplug. Let me
see what I can come up with in pc_max_used_gpa() to cover this one (roughly
along the lines of the sketch above); I'll respond here with a proposal. I
would really love to have v7.1.0 with this issue fixed, but I am not very
confident it is going to make it :(

Meanwhile, let me know if you have thoughts on this one:

https://lore.kernel.org/qemu-devel/1b2fa957-74f6-b5a9-3fc1-65c5d6830...@oracle.com/

I am going to assume that, if there are no comments on the above, I'll keep
things as is. And also let me know whether I can retain your ack with
Bernhard's suggestion here:

https://lore.kernel.org/qemu-devel/0eefb382-4ac6-4335-ca61-035babb95...@oracle.com/

>> +        hwaddr maxusedaddr = pc_pci_hole64_start() + pci_hole64_size;
>> +
>> +        /* Bail out if max possible address does not cross HT range */
>> +        if (maxusedaddr >= AMD_HT_START) {
>> +            pc_set_amd_above_4g_mem_start(pcms, maxusedaddr);
>> +        }
>> +
>> +        /*
>> +         * Advertise the HT region if address space covers the reserved
>> +         * region or if we relocate.
>> +         */
>> +        if (x86ms->above_4g_mem_start == AMD_ABOVE_1TB_START ||
>> +            cpu->phys_bits >= 40) {
>> +            e820_add_entry(AMD_HT_START, AMD_HT_SIZE, E820_RESERVED);
>> +        }
>> +    }
>> +
>> +    /*
>>       * Split single memory region and use aliases to address portions of it,
>>       * done for backwards compatibility with older qemus.
>>       */
>>
>
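
FWIW, if the phys-bits check does move out of pc_set_amd_above_4g_mem_start(),
I'd picture something like the below early in pc_memory_init(), reusing the
pc_max_used_gpa() sketch from above (again untested, and the error message
wording is up for grabs):

    /* Rough sketch, untested: vendor-agnostic phys-bits validation */
    hwaddr maxusedaddr = pc_max_used_gpa(pci_hole64_size);
    hwaddr maxphysaddr = ((hwaddr)1 << cpu->phys_bits) - 1;

    if (maxphysaddr < maxusedaddr) {
        error_report("Address space limit 0x%"PRIx64" < 0x%"PRIx64
                     " phys-bits too low (%u)",
                     maxphysaddr, maxusedaddr, cpu->phys_bits);
        exit(EXIT_FAILURE);
    }

That would catch the Intel/hotplug case you describe as well, and the
AMD-specific path then only needs to decide whether to relocate ram-above-4g
and reserve the HT range in the e820.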