Hi Xiaoyao! Hi Zhao!

Xiaoyao, I tested the patch you provided; it works smoothly and is easy to apply. Nothing to complain about.
Zhao, I also tried your approach (extending cpu_address_space_init() with an
AddressSpace parameter). At first it crashed in malloc with:

    malloc(): unaligned tcache chunk detected

After a little investigation I resized the cpu->cpu_ases array so that it can
fit a second element, and it started working.

However, it looks like cpu_address_space_destroy() needs some adjustment,
because it now treats the cpu->cpu_ases elements as dynamically allocated and
destroys them with g_free(). Passing &smram_address_space to
cpu_address_space_init() in register_smram_listener() could therefore lead to
a problem, since it is statically allocated in the binary.

So, my question now: what should I do? Should I send Xiaoyao's patch as a new
separate patch for review, or continue polishing Zhao's approach? Any comments
from Paolo?

> On 4 Jul 2025, at 16:50, Xiaoyao Li <xiaoyao...@intel.com> wrote:
>
> On 7/4/2025 4:20 PM, Zhao Liu wrote:
>>> yes, QEMU supports separate address space for SMM mode with KVM. It's
>>> just that QEMU doesn't connect it with the CPU address space.
>>
>> Yes, you're right.
>>
>> (But initially, it might have been intentional, as KVM's SMM address
>> space is global. It is consistent with the current KVM/memory interface.
>> Creating a separate CPUAddressSpace for each CPU would involve a lot of
>> redundant work.)
>
> I think Paolo can answer why the KVM SMM support was not associated with
> the CPU address space, since Paolo enabled the KVM SMM support in QEMU.
> I guess maybe it was just overlooked.
>
>>> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
>>> index a68485547d50..7d6f4a86d802 100644
>>> --- a/include/exec/cpu-common.h
>>> +++ b/include/exec/cpu-common.h
>>> @@ -130,6 +130,8 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>>>   */
>>>  void cpu_address_space_destroy(CPUState *cpu, int asidx);
>>>
>>> +void cpu_address_space_add(CPUState *cpu, AddressSpace *as);
>>> +
>>
>> Instead of introducing a "redundant" interface, it's better to lift the
>> restrictions of cpu_address_space_init() on KVM and reuse it. Moreover,
>> not explicitly setting asidx can be risky.
>
> I thought about it. I just wanted to provide a PoC implementation for
> Kirill to try, so I didn't touch the existing interface on purpose.
>
> Meanwhile, I also had the idea of just calling the existing
> cpu_address_space_init() instead of adjusting it, but it needs to be
> called for SMRAM as well to cover SMM. This way, it would still create a
> redundant address space for each CPU (two, when counting the SMRAM one).
> But that is how it behaves today: with KVM, each CPU has a separate
> address space for system memory.
>
> And I'm still investigating whether switching to reusing the existing
> address space, instead of creating a new one in cpu_address_space_init(),
> would cause compatibility problems. This is also the reason why I just
> provided a draft PoC diff instead of a formal patch.
>
>> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
>> index a68485547d50..e3c70ccb1ea0 100644
>> --- a/include/exec/cpu-common.h
>> +++ b/include/exec/cpu-common.h
>> @@ -106,6 +106,8 @@ size_t qemu_ram_pagesize_largest(void);
>>   * @asidx: integer index of this address space
>>   * @prefix: prefix to be used as name of address space
>>   * @mr: the root memory region of address space
>> + * @as: the pre-created AddressSpace. If given, there is no need to
>> + *      specify @mr.
>>   *
>>   * Add the specified address space to the CPU's cpu_ases list.
>>   * The address space added with @asidx 0 is the one used for the
>> @@ -117,10 +119,10 @@ size_t qemu_ram_pagesize_largest(void);
>>   * cpu->num_ases to the total number of address spaces it needs
>>   * to support.
>>   *
>> - * Note that with KVM only one address space is supported.
>>   */
>>  void cpu_address_space_init(CPUState *cpu, int asidx,
>> -                            const char *prefix, MemoryRegion *mr);
>> +                            const char *prefix, MemoryRegion *mr,
>> +                            AddressSpace *as);
>>  /**
>>   * cpu_address_space_destroy:
>>   * @cpu: CPU for which address space needs to be destroyed
>> diff --git a/system/physmem.c b/system/physmem.c
>> index ff0ca40222d3..15aedfb58055 100644
>> --- a/system/physmem.c
>> +++ b/system/physmem.c
>> @@ -776,16 +776,23 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
>>  #endif /* CONFIG_TCG */
>>  void cpu_address_space_init(CPUState *cpu, int asidx,
>> -                            const char *prefix, MemoryRegion *mr)
>> +                            const char *prefix, MemoryRegion *mr,
>> +                            AddressSpace *as)
>>  {
>>      CPUAddressSpace *newas;
>> -    AddressSpace *as = g_new0(AddressSpace, 1);
>> -    char *as_name;
>> +    AddressSpace *cpu_as;
>> -    assert(mr);
>> -    as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
>> -    address_space_init(as, mr, as_name);
>> -    g_free(as_name);
>> +    if (!as) {
>> +        cpu_as = g_new0(AddressSpace, 1);
>> +        char *as_name;
>> +
>> +        assert(mr);
>> +        as_name = g_strdup_printf("%s-%d", prefix, cpu->cpu_index);
>> +        address_space_init(cpu_as, mr, as_name);
>> +        g_free(as_name);
>> +    } else {
>> +        cpu_as = as;
>> +    }
>>      /* Target code should have set num_ases before calling us */
>>      assert(asidx < cpu->num_ases);
>>      if (asidx == 0) {
>>          /* address space 0 gets the convenience alias */
>> -        cpu->as = as;
>> +        cpu->as = cpu_as;
>>      }
>> -    /* KVM cannot currently support multiple address spaces. */
>> -    assert(asidx == 0 || !kvm_enabled());
>> -
>>      if (!cpu->cpu_ases) {
>>          cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
>>          cpu->cpu_ases_count = cpu->num_ases;
>> @@ -805,12 +809,12 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>>      newas = &cpu->cpu_ases[asidx];
>>      newas->cpu = cpu;
>> -    newas->as = as;
>> +    newas->as = cpu_as;
>>      if (tcg_enabled()) {
>>          newas->tcg_as_listener.log_global_after_sync =
>>              tcg_log_global_after_sync;
>>          newas->tcg_as_listener.commit = tcg_commit;
>>          newas->tcg_as_listener.name = "tcg";
>> -        memory_listener_register(&newas->tcg_as_listener, as);
>> +        memory_listener_register(&newas->tcg_as_listener, cpu_as);
>>      }
>>  }
>> ---
>>
>> In this interface, whether to reuse the existing address space (@as
>> argument) or create a new one for the CPU depends on the maintainer's
>> final opinion anyway. If the choice is to reuse, as in the code above,
>> the other callers need to be adjusted. If so, I suggest Kirill handle
>> this in a separate patch.
>
>>>  #include "kvm_i386.h"
>>> @@ -90,6 +91,12 @@ static bool kvm_cpu_realizefn(CPUState *cs, Error **errp)
>>>          kvm_set_guest_phys_bits(cs);
>>>      }
>>>
>>> +    for (int i = 0; i < kvm_state->nr_as; i++) {
>>> +        if (kvm_state->as[i].as) {
>>> +            cpu_address_space_add(cs, kvm_state->as[i].as);
>>> +        }
>>> +    }
>>> +
>>
>> This will add smram twice, with the following cpu_address_space_add().
>
> No, SMRAM is initialized at the machine init done notifier, which runs
> after this.
>
>> Why are all KVM address spaces unconditionally added to each CPU?
>> Another issue is that the SMM AS index would be "unknown"...
>
>>>      return true;
>>>  }
>>>
>>> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
>>> index 234878c613f6..3ba7b26e5a74 100644
>>> --- a/target/i386/kvm/kvm.c
>>> +++ b/target/i386/kvm/kvm.c
>>> @@ -2700,6 +2700,7 @@ static MemoryRegion smram_as_mem;
>>>
>>>  static void register_smram_listener(Notifier *n, void *unused)
>>>  {
>>> +    CPUState *cpu;
>>>      MemoryRegion *smram =
>>>          (MemoryRegion *) object_resolve_path("/machine/smram", NULL);
>>>
>>> @@ -2724,6 +2725,9 @@ static void register_smram_listener(Notifier *n, void *unused)
>>>      address_space_init(&smram_address_space, &smram_as_root, "KVM-SMRAM");
>>>      kvm_memory_listener_register(kvm_state, &smram_listener,
>>>                                   &smram_address_space, 1, "kvm-smram");
>>> +    CPU_FOREACH(cpu) {
>>> +        cpu_address_space_add(cpu, &smram_address_space);
>>> +    }
>>>  }
>>
>> With cpu_address_space_init(), this could be:
>>
>>     CPU_FOREACH(cpu) {
>>         /* Ensure the SMM address space has index 1. */
>>         assert(cpu->num_ases == 1);
>>         cpu->num_ases = 2;
>>         cpu_address_space_init(cpu, 1, NULL, NULL, &smram_address_space);
>>     }