When the CPU-to-NUMA association isn't provided explicitly, the default association is given by mc->get_default_cpu_node_id(). However, the default association doesn't fully consider the CPU topology, and this causes CPU topology warnings when booting a Linux guest.
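Purely as an illustration (not part of the patch), here is a minimal, self-contained sketch of the index arithmetic involved, assuming the topology from the -smp example below (sockets=2, cores=3, threads=1, with dies=1 and clusters=1 implied) and six NUMA nodes. The program and its names are hypothetical; only the arithmetic mirrors the default association before and after the change:

  #include <stdio.h>

  int main(void)
  {
      /* Hypothetical values matching -smp 6,sockets=2,cores=3,threads=1
       * and six -numa nodes; dies/clusters are assumed to default to 1.
       */
      const int dies = 1, clusters = 1, cores = 3, threads = 1;
      const int num_nodes = 6, max_cpus = 6;

      for (int n = 0; n < max_cpus; n++) {
          int socket_id = n / (dies * clusters * cores * threads);
          int old_node = n % num_nodes;         /* pre-patch default */
          int new_node = socket_id % num_nodes; /* post-patch default */
          printf("CPU#%d: socket %d, old NODE#%d, new NODE#%d\n",
                 n, socket_id, old_node, new_node);
      }
      return 0;
  }

Compiled and run, this prints CPU#0/1/2 mapping to NODE#0 and CPU#3/4/5 mapping to NODE#1 with the socket-based default, instead of NODE#0 to NODE#5 with the old index-based default, matching the associations described below.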
For example, the following warning messages are observed when the Linux guest is booted with the following command lines:

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host \
  -cpu host \
  -smp 6,sockets=2,cores=3,threads=1 \
  -m 1024M,slots=16,maxmem=64G \
  -object memory-backend-ram,id=mem0,size=128M \
  -object memory-backend-ram,id=mem1,size=128M \
  -object memory-backend-ram,id=mem2,size=128M \
  -object memory-backend-ram,id=mem3,size=128M \
  -object memory-backend-ram,id=mem4,size=128M \
  -object memory-backend-ram,id=mem5,size=384M \
  -numa node,nodeid=0,memdev=mem0 \
  -numa node,nodeid=1,memdev=mem1 \
  -numa node,nodeid=2,memdev=mem2 \
  -numa node,nodeid=3,memdev=mem3 \
  -numa node,nodeid=4,memdev=mem4 \
  -numa node,nodeid=5,memdev=mem5

  alternatives: patching kernel code
  BUG: arch topology borken
  the CLS domain not a subset of the MC domain
  <the above error log repeats>
  BUG: arch topology borken
  the DIE domain not a subset of the NODE domain

With the current implementation of mc->get_default_cpu_node_id(), CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 respectively. That's incorrect: CPU#0/1/2 should be associated with the same NUMA node because they sit in the same socket.

This fixes the issue by populating the CPU topology in virt_possible_cpu_arch_ids() and taking the socket ID into account when the default CPU-to-NUMA association is computed in virt_get_default_cpu_node_id(). With this applied, no more CPU topology warnings are seen from the Linux guest. The 6 CPUs are associated with NODE#0/1, while no CPUs are associated with NODE#2/3/4/5.

Signed-off-by: Gavin Shan <gs...@redhat.com>
---
 hw/arm/virt.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 46bf7ceddf..dee02b60fc 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2488,7 +2488,9 @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 
 static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx % ms->numa_state->num_nodes;
+    int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+    return socket_id % ms->numa_state->num_nodes;
 }
 
 static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
@@ -2496,6 +2498,7 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
     int n;
     unsigned int max_cpus = ms->smp.max_cpus;
     VirtMachineState *vms = VIRT_MACHINE(ms);
+    MachineClass *mc = MACHINE_GET_CLASS(vms);
 
     if (ms->possible_cpus) {
         assert(ms->possible_cpus->len == max_cpus);
@@ -2509,6 +2512,18 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[n].type = ms->cpu_type;
         ms->possible_cpus->cpus[n].arch_id =
             virt_cpu_mp_affinity(vms, n);
+
+        ms->possible_cpus->cpus[n].props.has_socket_id = true;
+        ms->possible_cpus->cpus[n].props.socket_id =
+            n / (ms->smp.dies * ms->smp.clusters *
+                 ms->smp.cores * ms->smp.threads);
+        if (mc->smp_props.dies_supported) {
+            ms->possible_cpus->cpus[n].props.has_die_id = true;
+            ms->possible_cpus->cpus[n].props.die_id =
+                n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+        }
+        ms->possible_cpus->cpus[n].props.has_core_id = true;
+        ms->possible_cpus->cpus[n].props.core_id = n / ms->smp.threads;
         ms->possible_cpus->cpus[n].props.has_thread_id = true;
         ms->possible_cpus->cpus[n].props.thread_id = n;
     }
-- 
2.23.0