[Expired for QEMU because there has been no activity for 60 days.]
** Changed in: qemu
Status: Incomplete => Expired
--
https://bugs.launchpad.net/bugs/1856335
Title:
Cache Layout wrong on many Zen Arch CPUs
The QEMU project is currently considering moving its bug tracking to
another system. For this we need to know which bugs are still valid
and which could be closed already. Thus we are setting older bugs to
"Incomplete" now.
If you still think this bug report here is valid, then please switch
the state back to "New" within the next 60 days, otherwise this report
will be marked as expired.
Sanjay,
You can just increase the number of vcpus, for example to 64, and then
continue to define the vcpus in the same pattern (6x enabled=yes, then
2x enabled=no).
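For illustration, a minimal sketch of what such a vcpu list can look
like in the libvirt domain XML - the IDs and the 6-enabled/2-disabled
grouping here are assumed from the description above, not taken from an
actual working config:
<vcpu placement='static'>64</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no'/>
  <vcpu id='1' enabled='yes' hotpluggable='yes'/>
  <vcpu id='2' enabled='yes' hotpluggable='yes'/>
  <vcpu id='3' enabled='yes' hotpluggable='yes'/>
  <vcpu id='4' enabled='yes' hotpluggable='yes'/>
  <vcpu id='5' enabled='yes' hotpluggable='yes'/>
  <vcpu id='6' enabled='no' hotpluggable='yes'/>
  <vcpu id='7' enabled='no' hotpluggable='yes'/>
  <!-- same 6x enabled / 2x disabled pattern repeats for ids 8-63 -->
</vcpus>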
h-sieger,
I did some testing with Geekbench 5:
baseline multicore score = 12733
https://browser.geekbench.com/v5/cpu/3069626
score with option = 12775
https://browser.geekbench.com/v5/cpu/3069415
best score with your XML above = 16960
https://browser.geekbench.com/v5/cpu/3066003
I'm running a …
@sanjaybmd
I'm glad to read that it worked for you. In fact, since I posted the XML
I haven't had the time to do benchmarking; now my motherboard is dead
and I have to wait for repair/replacement.
Do you have any data to quantify the performance gain?
As to the number of cores, you will notice t…
h-sieger,
Your XML gave me very significant performance gains.
Is there any way to do this with more than 24 assigned cores?
Yep, I read the Reddit thread, had no idea this was possible.
Still, both solutions are ugly workarounds and it would be nice to fix
this properly. But at least I don't have to patch and compile QEMU on my
own anymore.
@Jan: this coreinfo output looks good.
I finally managed to get the core/cache alignment right, I believe: …
The problem is caused by the fact that with Ryzen CPUs with disabled
cores, the APIC IDs are not sequential on the host - in order for cache
topology to be configured properly, there is a 'hole' in APIC ID and
core ID numbering (I have added full output of cpuid for my 3900X).
Unfortunately, adding holes …
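For anyone who wants to see those holes on their own host, the APIC and
core IDs are visible without extra tools; a quick sketch, nothing
model-specific assumed:
# note the gaps in apicid / core id on Ryzen parts with disabled cores
grep -E '^(processor|core id|apicid)' /proc/cpuinfo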
Thanks for clarifying, Jan.
In the meantime I tried a number of so-called solutions published on
Reddit and other places, none of which seems to work.
So if I understand it correctly, there is currently no solution to the
incorrect L3 cache layout for Zen architecture CPUs. At best a
workaround f…
h-sieger,
that is a misunderstanding - read my comment carefully again:
"A workaround for Linux VMs is to disable CPUs (and setting their
number/pinnings accordingly, e.g. every 4th (and 3rd for 3100) core is going to
be 'dummy' and disabled system-wide) by e.g. echo 0 >
/sys/devices/system/cpu/cpu3/online"
This is the CPU cache layout as shown by lscpu -a -e:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
  0    0      0    0       0:0:0:0    yes 3800.0000 2200.0000
  1    0      0    1       1:1:1:0    yes 3800.0000 2200.0000
  2    0      0    2       2:2:2:0    yes 3800.0000 2200.0000
  …
Jan, I tried your suggestion but it didn't make a difference. Here is my
current setup:
h/w: AMD Ryzen 9 3900X
kernel: 5.4
QEMU: 5.0.0-6
Chipset selection: Q35-5.0
Configuration: host-passthrough, cache enabled
Use coreinfo.exe inside Windows. The problem is this:
Logical Processor to Cache Map: …
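A typical invocation for checking exactly this, per the Sysinternals
docs (-c dumps the core mapping, -l the cache mapping):
coreinfo.exe -c -l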
Thanks Jan. I had some new hardware/software issues combined with the
QEMU 5.0 issues that had my Windows VM crashing after some minutes.
I totally overlooked the following:
So I guess you posted that to answer this:
https://www.reddit.com/r/VFIO/comments/erwzrg/think_i_found_a_workaround_
It adds "host-cache-info=on,l3-cache=off"
to the qemu -cpu args.
I believe l3-cache=off is useless with host-cache-info=on,
so that should do what you want.
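For reference, a sketch of where that option lands on a full command
line - the machine, memory and disk arguments here are placeholders,
not from this thread:
qemu-system-x86_64 -enable-kvm -machine q35 -m 16G \
  -cpu host,topoext=on,host-cache-info=on \
  -smp 24,sockets=1,cores=12,threads=2 \
  -hda vdisk.qcow2
When using libvirt instead of raw qemu, <cache mode='passthrough'/>
inside the <cpu> element should generate host-cache-info=on.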
With regard to Jan's comment earlier and the virsh capabilities listing
the cores and siblings, also note the following lines from virsh
capabilities for a 3900X CPU: …
virsh capabilities is perfectly able to identify the L3 cache structure
and associate the ri…
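On hosts where libvirt reports it, the section in question looks
roughly like the following - the bank sizes and cpu sets here are
illustrative, not captured from the 3900X in question:
<cache>
  <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0-2,12-14'/>
  <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='3-5,15-17'/>
  …
</cache>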
Sieger, I am not an expert on XML, so I don't know. QEMU probably cannot
handle disabled cores. I am still trying to learn more about this
problem.
"The CPUs with cpuset="0,12" are disabled once booted. The host-cache-
info=on is the part that makes sure that the cache config is passed to
the VM (but unfortunately does not take disabled cores into account,
which results in incorrect config). The qemu:commandline is added
because I need to add
Damir:
Hm, must be some misconfiguration, then. Here is my config for Linux VMs
to utilize 3 out of the 4 CCXs. The important parts of the libvirt
domain XML: …
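For readers, a hedged sketch of what a "3 out of the 4 CCXs" config
might look like for a 3900X - the pin targets and topology below are my
assumptions, not Jan's actual file:
<vcpu placement='static'>24</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <!-- ...one pin per vcpu, leaving the host cores of the unused CCX out... -->
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' dies='1' cores='12' threads='2'/>
  <cache mode='passthrough'/>
</cpu>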
No, creating artificial NUMA nodes is, simply put, never a good solution
for CPUs that operate as a single NUMA node - which is the case for all
Zen2 CPUs (except maybe EPYCs? not sure about those).
You may work around the L3 issue that way, but hit many new bugs/problems
by introducing multiple NUMA nodes…
The latest qemu has removed all the hard-coded configurations for AMD. It
leaves everything to the user to customize. One way to configure it is
using NUMA nodes. This will make sure the cpus under one NUMA node share
the same L3. Then pin the correct host cpus to guest cpus using vcpupin.
I would change this -numa n…
Hi Jan,
The problem for me now is: why does every config (that I can figure out)
now result in SMT on / L3 shared across all cores? That is obviously
never true on Zen unless you have fewer than 4 cores; 8 cores should
always result in 2 L3 caches, and so should 16 threads with 8 cores +
SMT. This worked in my initial…
A workaround for Linux VMs is to disable CPUs (and setting their
number/pinnings accordingly, e.g. every 4th (and 3rd for 3100) core is
going to be 'dummy' and disabled system-wide) by e.g. echo 0 >
/sys/devices/system/cpu/cpu3/online
No good workaround for Windows VMs exists, as far as I know - t…
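As a concrete sketch of that guest-side step (assuming the dummies are
every 4th CPU, as on a 3-cores-per-CCX part - adjust the IDs to your
topology):
# inside the Linux guest, as root: take the dummy CPUs offline
for cpu in $(seq 3 4 23); do
    echo 0 > /sys/devices/system/cpu/cpu$cpu/online
done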
The problem is that disabled cores are not taken into account. ALL Zen2
CPUs have an L3 cache group per CCX and every CCX has 4 cores; the
problem is that some cores in each CCX (1 for the 6- and 12-core CPUs,
2 for the 3100) are disabled for some models, but they still use their
core ids (as can be seen in virsh capabilities)…
Same problem here on 5.0 and a 3900X (3 cores per CCX). And as stated
before - declaring NUMA nodes is definitely not the right solution if
the aim is to emulate the host CPU as closely as possible.
I upgraded to QEMU emulator version 5.0.50.
Using q35-5.1 (the latest) and the following libvirt configuration:
<memory unit='KiB'>50331648</memory>
<currentMemory unit='KiB'>50331648</currentMemory>
<vcpu>24</vcpu>
<os>
  <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
</os>
<emulator>/usr/sha…
Hello,
I took a look today at the layouts when using a 1950X (which previously
worked - and yes, admittedly, I am using Windows / coreinfo). Any basic
config that previously worked (something as simple as sockets=1,
cores=8, threads=1, and now also dies=1) now presents the topology as
if all cores share…
Yes, Sieger. Please install 5.0; it should work fine. I am not sure
about 4.2.
Hello Babu,
Thanks for the reply and the QEMU command line. I will try to implement
it in the XML.
So essentially what you do is define each group of cpus and associate
them with a numa node:
-numa node,nodeid=0,cpus=0-5 -numa node,nodeid=1,cpus=6-11 -numa
node,nodeid=2,cpus=12-17 -numa node,nodeid=3,cpus=18-23
Hi Sieger,
I am not an expert on libvirt. I mostly use the qemu command line for my
tests. I was able to achieve the 3960X configuration with the following
command line:
# qemu-system-x86_64 -name rhel7 -m 16384 -smp
24,cores=12,threads=2,sockets=1 -hda vdisk.qcow2 -enable-kvm -net nic
-net bridge,b…
Here is the vm.log with the qemu command line (shortened):
2020-05-03 18:23:38.674+0000: starting up libvirt version: 5.10.0, qemu
version: 5.0.50v5.0.0-154-g2ef486e76d-dirty, kernel: 5.4.36-1-MANJARO
-machine
pc-q35-4.2,accel=kvm,usb=off,vmport=off,dump-guest-core=off,kernel_irqchip=on,pflash0=lib…
Finally installed QEMU 5.0.0-154 - still the same. QEMU doesn't
recognize the L3 caches and still lists 3 L3 caches instead of 4 caches
with 3 cores/6 threads each.
It could be an issue of how the kernel presents the CPU topology.
Hardware: AMD Ryzen 3900X, 12 cores / 24 threads (SMT)
Host: kernel 5.6.6, QEMU 4.2
virsh capabilities | grep "cpu id"
…
Damir, here is an example of how to use the NUMA configuration:
-smp 16,maxcpus=16,cores=16,threads=1,sockets=1 -numa node,nodeid=0,cpus=0-7
-numa node,nodeid=1,cpus=8-15
This will help to put all the cores inside the correct L3 boundary (see
the fuller sketch below). I strongly suggest using the latest qemu release.
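A fuller sketch of that invocation, since newer QEMU wants NUMA nodes
explicitly backed by memory objects - the memory sizes and disk here
are placeholders, not from this thread:
qemu-system-x86_64 -enable-kvm -machine q35 -m 16G \
  -cpu host,topoext=on \
  -smp 16,maxcpus=16,sockets=1,cores=16,threads=1 \
  -object memory-backend-ram,id=m0,size=8G \
  -object memory-backend-ram,id=m1,size=8G \
  -numa node,nodeid=0,cpus=0-7,memdev=m0 \
  -numa node,nodeid=1,cpus=8-15,memdev=m1 \
  -hda vdisk.qcow2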
AMD does not use dies; for AMD, dies is normally set to 1. You probably
have to pass dies in some other way. Did you try the latest qemu v5.0?
Please try it.
Qemu expects the user to configure the topology based on their requirements.
Try replacing
…
with
…
You can also use the numa configuration…
Same problem for the Ryzen 9 3900X. There should be 4x L3 caches, but
there are only 3.
Same results with "host-passthrough" and "EPYC-IBPB". Windows doesn't
recognize the correct L3 cache layout.
From coreinfo.exe:
Logical Processor to Cache Map:
**-- Data Cache 0, Le…
Damir,
We normally test Linux guests here. Can you please give me the exact qemu
command line? Even just the SMP parameters (sockets, cores, threads, dies)
will work. I will try to recreate it locally first.
Give me an example of what works and what does not work.
I have recently sent a few more patches to fix…
This is the commit I am referencing:
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=8f4202fb1080f86958782b1fca0bf0279f67d136
Hi,
I've since confirmed that this bug also exists (as expected) on Linux
guests, as well as on Zen1 EPYC 7401 CPUs, to make sure this wasn't a
problem with the detection of the newer consumer platform.
Basically it seems (looking at the code with layman eyes) that as long
as you have a topology that…
** Description changed:
AMD CPUs have L3 cache per 2, 3 or 4 cores. Currently, TOPOEXT seems to
always map cache as if it were a 4-core-per-CCX CPU, which is
incorrect, and costs upwards of 30% performance (more realistically 10%)
in L3-cache-layout-aware applications.
Example on a 4-CC…