On Tue, Nov 15, 2022 at 05:53:13PM -0000, [email protected] wrote:
> Hi Klaas,
> 
> in our case, the Hypervisor CPU looks like this:
> 
> CPU(s):                32
> On-line CPU(s) list:   0-31
> Thread(s) per core:    2
> Core(s) per socket:    8
> Socket(s):             2
> NUMA node(s):          2
> NUMA node0 CPU(s):     0-7,16-23
> NUMA node1 CPU(s):     8-15,24-31
> 
> The PROD VM should have 24 PINNED (due to licensing requirements) vCPUs out 
> of the 32 threads and around 838G of the 1 TB RAM.
> The TEST VM should have 8 vCPUs out of the 32 threads and around 128G of 1 TB 
> RAM.
> 
> With these PROD VM Requirements, it does not make sense to limit vCPUs to one 
> socket / NUMA node. The TEST VM can be limited to one socket.

In this case the option would be to actually define two vNUMA nodes in
your PROD and TEST VMs: give your PROD VM 12+12 vCPUs and your TEST VM
4+4. Then make sure to define the pinning so that the vCPUs of each
vNUMA node align with the CPUs of one physical NUMA node. Also, sharing
the CPUs that VDSM is using with the TEST VM might be preferable, unless
you want to share some CPUs between TEST and PROD. So like this:

TEST: pCPUs 0,1,16,17 map to first vNUMA
      pCPUs 8,9,24,25 map to second vNUMA

PROD: pCPUs 2-7,18-23 map to first vNUMA
      pCPUs 10-15,26-31 map to second vNUMA
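
If it helps, the layout above can be turned into oVirt "CPU Pinning
Topology" strings with a small script. This is just my own sketch (the
`pinning_string` helper is not from the RHV guide); it pins each pair of
consecutive vCPUs to both hardware threads of one physical core, in the
same style as the guide's output below (siblings on this host are pCPU n
and n+16 per the lscpu output above):

```python
def pinning_string(vnuma_cores, sibling_offset=16):
    """Build an oVirt pinning string.

    vnuma_cores: one list of physical core IDs (first-thread pCPU numbers)
    per vNUMA node. Each physical core contributes two consecutive vCPUs,
    both pinned to the core's two hardware threads (pCPU n and n+16 here).
    """
    parts, vcpu = [], 0
    for cores in vnuma_cores:
        for c in cores:
            parts.append(f"{vcpu}#{c},{c + sibling_offset}")
            parts.append(f"{vcpu + 1}#{c},{c + sibling_offset}")
            vcpu += 2
    return "_".join(parts)

# TEST: 4+4 vCPUs on cores 0,1 and 8,9 (threads 0,1,16,17 and 8,9,24,25)
test = pinning_string([[0, 1], [8, 9]])
# PROD: 12+12 vCPUs on cores 2-7 and 10-15
prod = pinning_string([list(range(2, 8)), list(range(10, 16))])

print(test)  # 0#0,16_1#0,16_2#1,17_3#1,17_4#8,24_5#8,24_6#9,25_7#9,25
print(prod)
```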

> 
> I guess I have made these mistakes:
> 
> - I should not use Physical CPU thread 0 but leave it for Hypervisor
> - I have configured the VM "2 threads per core" with <topology sockets='16' 
> cores='6' threads='2'/>,

This does not add up: 16 sockets x 6 cores x 2 threads would be 192
vCPUs, not 24. I think you meant 2 sockets (2 x 6 x 2 = 24).
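
For reference, a PROD topology that does add up to 24 vCPUs, with two
vNUMA nodes, might look roughly like this in the domain XML (splitting
the ~838G evenly across the two cells is only an assumption on my part;
size them to your workload):

```xml
<cpu match='exact'>
  <topology sockets='2' cores='6' threads='2'/>  <!-- 2 x 6 x 2 = 24 vCPUs -->
  <numa>
    <!-- even memory split is an assumption, not a recommendation -->
    <cell id='0' cpus='0-11' memory='419' unit='GiB'/>
    <cell id='1' cpus='12-23' memory='419' unit='GiB'/>
  </numa>
</cpu>
```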

> but have not specified two threads for each vCPU in the pinning field.

I don't think this is strictly necessary. But you definitely want to
make sure that the virtual threads match the physical threads. In other
words, you only want to mix and match pCPUs 0 and 16 (or 1 and 17, etc.),
so I would say 0#0_1#16 is fine, just like 0#0,16_1#0,16.
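
A quick way to sanity-check that rule: the `siblings_only` helper below
is hypothetical (not part of oVirt), and it assumes the sibling layout
from your lscpu output, where pCPU n and n+16 are the two hardware
threads of one core:

```python
SIBLING_OFFSET = 16  # thread-sibling distance on this 32-thread host

def siblings_only(pinning):
    """True if every vCPU in an oVirt pinning string is pinned only to
    hardware threads of a single physical core (pCPU n and n+16 here)."""
    for entry in pinning.split("_"):
        _vcpu, pcpus = entry.split("#")
        cores = {int(p) % SIBLING_OFFSET for p in pcpus.split(",")}
        if len(cores) != 1:  # mixes threads from different physical cores
            return False
    return True

print(siblings_only("0#0_1#16"))       # True
print(siblings_only("0#0,16_1#0,16"))  # True
print(siblings_only("0#0,17"))         # False: mixes cores 0 and 1
```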

Alternatively, you may define a single-threaded VM (topology 2:12:1)
and not care about the physical threads that much.

Which of the alternatives is best would depend on your workload in the
guest and how well it can benefit from the knowledge of the host
topology.


Hope this helps

    Tomas

> In the oVirt "Resource Allocation" Tab in "CPU Pinning Topology", I have 
> specified:
> 0#8_1#9_2#10_3#11_4#12_5#13_6#14_7#15_8#16_9#17_10#18_11#19_12#20_13#21_14#22_15#23_16#24_17#25_18#26_19#27_20#28_21#29_22#30_23#31
> 
> The Calculate CPU Pinning script from the RHV HANA Guide results in this 
> mapping:
> 
> 0#1,17
> 1#1,17
> 2#2,18
> 3#2,18
> 4#3,19
> 5#3,19
> 6#4,20
> 7#4,20
> 8#5,21
> 9#5,21
> 10#6,22
> 11#6,22
> 12#9,25
> 13#9,25
> 14#10,26
> 15#10,26
> 16#11,27
> 17#11,27
> 18#12,28
> 19#12,28
> 20#13,29
> 21#13,29
> 22#14,30
> 23#14,30
> 
> But with this approach, I can not separate CPUs between PROD VM and TEST VM.
> 
> Any ideas?
> _______________________________________________
> Users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/[email protected]/message/PF7UCDGZPL3FJ4O5XBICJNSJKOM2ALBE/