Hello, Marty
During VM deployment on XenServer/xcp-ng, the |cores-per-socket|
property is retrieved directly from the hypervisor's templates. When
creating the VM, ACS fetches the template through Xen's API, using the
OS configured on the VM's template as the parameter. These templates
carry many of the hypervisor's default configurations; among them is
|cores-per-socket|. On my xcp-ng environment, the Windows templates
come with |cores-per-socket: 2|.
To check this, you may list all the templates using the |xe
template-list| command:
[15:17 xcp-ng-wfokjhs ~]# xe template-list
...
uuid ( RO)                : 7774689b-4ca1-4dea-8545-dddd6b64c17f
          name-label ( RW): Windows 10 (64-bit)
    name-description ( RW): Clones of this template will automatically
                            provision their storage when first booted and
                            then reconfigure themselves with the optimal
                            settings for Windows 10 (64-bit).
...
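As a convenience, if you already know the template's name-label, you
should also be able to filter the listing and print only the UUID
(this uses xe's standard list filtering; adjust the name-label to your
template):

[15:17 xcp-ng-wfokjhs ~]# xe template-list name-label="Windows 10 (64-bit)" params=uuid
uuid ( RO)    : 7774689b-4ca1-4dea-8545-dddd6b64c17f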
Then, you may list the template's parameters using |xe
template-param-list uuid=<template_uuid>|. In the |platform| attribute,
you will see a map of the parameters used by the template, including
|cores-per-socket|, if it is set:
[15:17 xcp-ng-wfokjhs ~]# xe template-param-list uuid=7774689b-4ca1-4dea-8545-dddd6b64c17f
uuid ( RO)                : 7774689b-4ca1-4dea-8545-dddd6b64c17f
          name-label ( RW): Windows 10 (64-bit)
...
           platform (MRW): videoram: 8; hpet: true; acpi_laptop_slate: 1;
                           secureboot: auto; viridian_apic_assist: true;
                           apic: true; device_id: 0002; cores-per-socket: 2;
                           viridian_crash_ctl: true; pae: true; vga: std;
                           nx: true; viridian_time_ref_count: true;
                           viridian_stimer: true; viridian: true; acpi: 1;
                           viridian_reference_tsc: true
 allowed-operations (SRO): changing_NVRAM; changing_dynamic_range;
                           changing_shadow_memory; changing_static_range;
                           provision; export; clone; copy
...
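To read just that key without going through the whole parameter list,
|xe template-param-get| with a |param-key| should also work (I have not
checked this on every XenServer/xcp-ng release); on the template above
it would print 2:

[15:17 xcp-ng-wfokjhs ~]# xe template-param-get uuid=7774689b-4ca1-4dea-8545-dddd6b64c17f param-name=platform param-key=cores-per-socket
2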
There are two ways to overcome this. The first is to change the Xen
template configuration directly, using, for example, |xe
template-param-remove uuid=<template_uuid> param-name=platform
param-key=cores-per-socket| to remove the parameter; I believe you can
use |template-param-set| to change the parameter's value as well, but I
have not tested it. The second way is to stop the VM, edit its
|platform| setting to the value you want, and then start the VM again.
Both approaches are sketched below. Beware that changing the CPU
topology may affect the guest OS, so it should be done with caution.
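As a rough sketch of both approaches (|<template_uuid>| and |<vm_uuid>|
are placeholders; the |template-param-set| line uses xe's usual
|param-name:key=value| map syntax but, as said, I have not tested it):

# 1) Change the hypervisor template's default:
xe template-param-remove uuid=<template_uuid> param-name=platform param-key=cores-per-socket
xe template-param-set uuid=<template_uuid> platform:cores-per-socket=1

# 2) Change a single VM: stop it through ACS, adjust its platform map
#    on the host, then start it again through ACS:
xe vm-param-set uuid=<vm_uuid> platform:cores-per-socket=4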
Best regards,
João Jandre
On 12/4/25 10:27, Marty Godsey wrote:
100% Nux.
I tried adding this as an additional field to the offering, but it was not
honored.
Plus, depending on the OS, the other sockets may not even be seen. As an
example, take Windows 10/11; I know this is a non-realistic example, but
it gets the point across. If I have a VM with 16 vCPUs assigned and it
spins up with 1 socket with 16 cores, I am fine. But if it spins up the
VM with 4 sockets, each with 4 cores, two of those sockets will not be
seen by the OS because it only supports 2 sockets.
From: Nux <[email protected]>
Date: Thursday, December 4, 2025 at 4:18 AM
To: [email protected] <[email protected]>
Cc: Wei ZHOU <[email protected]>
Subject: Re: Sockets to Core Ratio
It'd be nice to be able to set this in the compute offering as it can
impact licensing of various proprietary software.
I know, it's stupid, but people pay more if they have more sockets, as
opposed to cores.
There could be other considerations as well.
Regards
On 2025-12-04 07:23, Wei ZHOU wrote:
Hi,
You can stop the VM, add a VM setting cpu.corespersocket=4, and then
start the VM.
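For example, in the UI this is the stopped instance's Settings tab;
through the API it should be doable with updateVirtualMachine and its
details map. A sketch via CloudMonkey, with untested map syntax and
<vm_id> as a placeholder:

cmk update virtualmachine id=<vm_id> details[0].cpu.corespersocket=4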
Kind regards,
Wei
On Thu, Dec 4, 2025 at 7:56 AM Marty Godsey <[email protected]> wrote:
Hello,
When CloudStack creates a VM on my XCP cluster, it always adds more
sockets than I would like to have. So, for example, if I have a compute
offering that is 4 cores, it's 2 sockets with 2 cores each. Why can't it
be 1 socket and 4 cores? If this is something that is not being done by
CloudStack and XCP, I will go that route. Anyone seen this?
Thank you everyone.