On 3/25/22 9:19 PM, Igor Mammedov wrote:
On Wed, 23 Mar 2022 15:24:35 +0800
Gavin Shan <gs...@redhat.com> wrote:
Currently, the SMP configuration isn't taken into account when the CPU
topology is populated. Because of this, it's impossible to provide the
default CPU-to-NUMA mapping or association based on the socket ID of
the given CPU.

This takes the SMP configuration into account when the CPU topology is
populated. The die ID for the given CPU isn't assigned since it isn't
supported on the arm/virt machine yet. The cluster ID for the given CPU
is assigned, however, since it is already supported on the arm/virt
machine.
Signed-off-by: Gavin Shan <gs...@redhat.com>
---
hw/arm/virt.c | 11 +++++++++++
qapi/machine.json | 6 ++++--
2 files changed, 15 insertions(+), 2 deletions(-)
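(Not part of this patch, but for context on the commit message above: a
minimal sketch of the socket-based default NUMA mapping this enables
elsewhere in the series. The helper name and the socket-modulo-node
policy are assumptions for illustration; this patch only populates the
props such a mapping would consume.)

    /* Illustrative sketch only: derive a CPU's default NUMA node from
     * the socket ID populated below. The helper name and the modulo
     * policy are assumed for this example.
     */
    static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
    {
        int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;

        return socket_id % ms->numa_state->num_nodes;
    }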
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index d2e5ecd234..064eac42f7 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2505,6 +2505,7 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
int n;
unsigned int max_cpus = ms->smp.max_cpus;
VirtMachineState *vms = VIRT_MACHINE(ms);
+ MachineClass *mc = MACHINE_GET_CLASS(vms);
if (ms->possible_cpus) {
assert(ms->possible_cpus->len == max_cpus);
@@ -2518,6 +2519,16 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
ms->possible_cpus->cpus[n].type = ms->cpu_type;
ms->possible_cpus->cpus[n].arch_id =
virt_cpu_mp_affinity(vms, n);
+
+ assert(!mc->smp_props.dies_supported);
+ ms->possible_cpus->cpus[n].props.has_socket_id = true;
+ ms->possible_cpus->cpus[n].props.socket_id =
+ n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+ ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+ ms->possible_cpus->cpus[n].props.cluster_id =
+ n / (ms->smp.cores * ms->smp.threads);
Is there any relation between the cluster values (and the number of
clusters) here and what virt_cpu_mp_affinity() calculates?
They're different clusters. The cluster returned by virt_cpu_mp_affinity()
is reflected in the MPIDR_EL1 system register, which is mainly used by the
GICv2/GICv3 interrupt controller to send group interrupts to the CPU
cluster. It's notable that the value returned from virt_cpu_mp_affinity()
is always overridden by KVM, meaning this value is only used by TCG for
the emulated GICv2/GICv3.

The cluster in 'ms->possible_cpus' is passed to the ACPI PPTT table to
populate the CPU topology.
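(For reference, a simplified standalone sketch of that affinity packing,
modeled on QEMU's arm_cpu_mp_affinity(); the shift value and cluster-size
handling are reduced for illustration:)

    #include <stdint.h>

    #define ARM_AFF1_SHIFT 8   /* cluster number lands in MPIDR Aff1, bits [15:8] */

    /* Consecutive CPU indexes are grouped into clusters of 'clustersz'
     * CPUs: the cluster number goes into Aff1 and the CPU-within-cluster
     * number into Aff0. This is the "cluster" the GIC sees, independent
     * of the smp.clusters value used for ms->possible_cpus.
     */
    static uint64_t mp_affinity(int idx, uint8_t clustersz)
    {
        uint32_t aff1 = idx / clustersz;   /* cluster number */
        uint32_t aff0 = idx % clustersz;   /* CPU within the cluster */

        return ((uint64_t)aff1 << ARM_AFF1_SHIFT) | aff0;
    }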
+ ms->possible_cpus->cpus[n].props.has_core_id = true;
+ ms->possible_cpus->cpus[n].props.core_id = n / ms->smp.threads;
ms->possible_cpus->cpus[n].props.has_thread_id = true;
ms->possible_cpus->cpus[n].props.thread_id = n;
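(As a worked example of the formulas in the hunk above, for a hypothetical
-smp 16,sockets=2,clusters=2,cores=2,threads=2 guest; note the resulting
IDs are machine-global rather than per-socket:)

    #include <stdio.h>

    /* Stand-alone illustration of the ID derivation quoted above.
     * E.g. CPU 5 comes out as socket 0, cluster 1, core 2, thread 5.
     */
    int main(void)
    {
        const int clusters = 2, cores = 2, threads = 2;

        for (int n = 0; n < 16; n++) {
            printf("cpu %2d: socket %d cluster %d core %d thread %d\n",
                   n,
                   n / (clusters * cores * threads),
                   n / (cores * threads),
                   n / threads,
                   n);
        }
        return 0;
    }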
Of course, the target has the right to decide how to allocate IDs, and
mgmt is supposed to query these IDs before using them.
But:
 * IDs within 'props' are supposed to be arch-defined.
   (on x86, IDs are in the range [0, smp.foo_id); on ppc it's something
   different)
The question is what real hardware does here in the ARM case (i.e. how
.../cores/threads are described on bare metal)?
On ARM64 bare-metal machines, the core/cluster ID assignment is pretty
arbitrary. I checked the CPU topology on my bare-metal machine, which
has the following SMP configuration.
# lscpu
:
Thread(s) per core: 4
Core(s) per socket: 28
Socket(s): 2
smp.sockets = 2
smp.clusters = 1
smp.cores = 56 (28 per socket)
smp.threads = 4
// CPUs 0-111 belong to socket0 (package0)
// CPUs 112-223 belong to socket1 (package1)
# cat /sys/devices/system/cpu/cpu0/topology/package_cpus
00000000,00000000,00000000,0000ffff,ffffffff,ffffffff,ffffffff
# cat /sys/devices/system/cpu/cpu111/topology/package_cpus
00000000,00000000,00000000,0000ffff,ffffffff,ffffffff,ffffffff
# cat /sys/devices/system/cpu/cpu112/topology/package_cpus
ffffffff,ffffffff,ffffffff,ffff0000,00000000,00000000,00000000
# cat /sys/devices/system/cpu/cpu223/topology/package_cpus
ffffffff,ffffffff,ffffffff,ffff0000,00000000,00000000,00000000
// core/cluster IDs span from 0 to 27 on socket0
# for i in `seq 0 27`; do cat /sys/devices/system/cpu/cpu$i/topology/core_id; done
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
# for i in `seq 28 55`; do cat /sys/devices/system/cpu/cpu$i/topology/core_id; done
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
# for i in `seq 0 27`; do cat /sys/devices/system/cpu/cpu$i/topology/cluster_id; done
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
# for i in `seq 28 55`; do cat /sys/devices/system/cpu/cpu$i/topology/cluster_id; done
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
// However, core/cluster IDs start from 256 on socket1
# for i in `seq 112 139`; do cat /sys/devices/system/cpu/cpu$i/topology/core_id; done
256 257 258 259 260 261 262 263 264 265 266 267 268 269
270 271 272 273 274 275 276 277 278 279 280 281 282 283
# for i in `seq 140 167`; do cat /sys/devices/system/cpu/cpu$i/topology/core_id; done
256 257 258 259 260 261 262 263 264 265 266 267 268 269
270 271 272 273 274 275 276 277 278 279 280 281 282 283
# for i in `seq 112 139`; do cat /sys/devices/system/cpu/cpu$i/topology/cluster_id; done
256 257 258 259 260 261 262 263 264 265 266 267 268 269
270 271 272 273 274 275 276 277 278 279 280 281 282 283
# for i in `seq 140 167`; do cat /sys/devices/system/cpu/cpu$i/topology/cluster_id; done
256 257 258 259 260 261 262 263 264 265 266 267 268 269
270 271 272 273 274 275 276 277 278 279 280 281 282 283
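(For contrast with the machine-global IDs the patch assigns, a
hypothetical x86-style decomposition where each ID is local to its
container, i.e. each falls in a [0, smp.foo) range as mentioned above.
Purely illustrative; not what the patch under review does:)

    /* Hypothetical per-container decomposition of a flat CPU index, in
     * the spirit of the x86 [0, smp.foo_id) ranges. The patch under
     * review uses machine-global IDs instead.
     */
    static void split_topo_ids(int n, int clusters, int cores, int threads,
                               int *socket, int *cluster, int *core,
                               int *thread)
    {
        *thread  = n % threads;
        *core    = (n / threads) % cores;
        *cluster = (n / (threads * cores)) % clusters;
        *socket  = n / (threads * cores * clusters);
    }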