Public bug reported:
AMD CPUs have one L3 cache per 2, 3, or 4 cores. Currently, TOPOEXT seems to
always map the cache as if it were a 4-core-per-CCX CPU, which is
incorrect and costs upwards of 30% performance (more realistically 10%) in
L3-cache-layout-aware applications.
Example on a 4-CCX CPU (1950X /w
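For reference, a minimal sketch of the kind of invocation this is about; the
machine type, memory size, and exact flag spellings here are my assumptions,
not taken from the report:

  qemu-system-x86_64 \
    -machine q35,accel=kvm \
    -m 8G \
    -cpu host,topoext=on \
    -smp 16,sockets=1,cores=8,threads=2

With TOPOEXT enabled, the guest should see one L3 per CCX; the complaint is
that the cache is instead mapped as if every part had 4 cores per CCX,
regardless of the actual die layout.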
Still broken with QEMU 4.1-rc2 with kernel 5.2.

This is a huge problem, as it breaks performance: either in networking
(you can't use virtio-net, which is the only 100G adapter afaik), or
you have to disable huge pages, which is a blow to any large VM host, or
it breaks stimer, which increases CPU load.
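For context, the trade-offs mentioned above map to invocation knobs roughly
like the following (a sketch; the flag spellings are as in recent QEMU, and
hv-stimer additionally requires hv-time and hv-synic):

  -cpu host,topoext=on,hv-time,hv-synic,hv-stimer \
  -m 16G -mem-path /dev/hugepages

where -mem-path expects hugetlbfs mounted at /dev/hugepages; the workarounds
above each amount to dropping one of these pieces, at the cost described.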
Hi,
This seems to have died out. How do we proceed to get this looked into
by the correct people?
Thanks,
Damir
What can be done to increase the visibility of this? It's quite annoying
to deal with.
Also attaching my libvirt log, with a few errors at the end.
Thank you for looking into this!
** Attachment added: "Libvirt_log_command_line"
https://bugs.launchpad.net/qemu/+bug/1811533/+attachment/5290071/+files/libvirt_log.txt
Hello,

I took a look today at the layouts when using a 1950X (which previously
worked, and yes, admittedly, I am using Windows / Coreinfo). Any basic
config used to work, something as simple as Sockets=1, Cores=8, Threads=1
(now also Dies=1), but now the topology presents as if all cores
share one L3 cache.
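For what it's worth, the plain QEMU spelling of that kind of config (my
sketch, with the 4.1-era dies parameter) would be:

  -cpu host,topoext=on \
  -smp 8,sockets=1,dies=1,cores=8,threads=1

and Coreinfo's cache dump (coreinfo -l, if I recall the switch correctly) is
one way to inspect what the Windows guest actually sees.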
Hi Jan,

The problem for me now is why every config I can figure out results in
SMT on / L3 shared across all cores, which is obviously never true on
Zen unless you have fewer than 4 cores: 8 cores should always result in
2 L3 caches, and so should 16 threads with 8 cores + SMT. This worked in
my initial configuration.
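To spell out the expectation for Zen 1, where a CCX has at most 4 cores:

  8 cores, SMT off           -> 2 CCXs -> 2 L3 caches (cores 0-3 and 4-7)
  16 threads (8 cores + SMT) -> 2 CCXs -> 2 L3 caches (threads 0-7 and 8-15,
                                          sibling threads numbered adjacently)

A topology that reports a single L3 spanning all 8 cores cannot match the
real hardware.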
Hi,

I've since confirmed that this bug also exists (as expected) on Linux
guests, as well as on Zen 1 EPYC 7401 CPUs, to make sure this wasn't a
problem with detection of the newer consumer platform.

Basically, it seems (looking at the code with a layman's eyes) that as long
as you have a topology that
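On a Linux guest, one quick way to check the L3 grouping (index3 is
typically the L3 on x86) is:

  $ grep . /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list

On a correctly mapped 8-core, 2-CCX layout this should show two distinct
CPU lists, rather than one list spanning every CPU.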
This is the commit I am referencing:
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=8f4202fb1080f86958782b1fca0bf0279f67d136