Hello,
What version of gcc do Xen developers use for Xen? Is gcc 5.4 or 6.4 safe
to use?
Regards Andreas
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
On 23.09.2019 10:17, Jan Beulich wrote:
While, according to AMD's processor specs page, the 3700X is just an
8-core chip, I wonder whether
https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg01954.html
still affects this configuration as well. Could you give this a try in
at least the
On 23.09.2019 10:17, Jan Beulich wrote:
Does booting with a single vCPU work?
The number of vCPUs makes no difference.
Well, according to Steven it does, with viridian=0. Could you
re-check this?
I can confirm that viridian=0 AND vcpus=1 makes the system bootable
(though with a long delay).
While AMD Ryzen 2700X was working perfectly in my tests with Windows 10,
the new 3700X does not even boot a Windows HVM. With viridian=1 you get
BSOD HAL_MEMORY_ALLOCATION and with viridian=0 you get "multiprocessor
config not supported".
xl dmesg says:
(XEN) d1v0 VIRIDIAN CRASH: ac 0 a0a0 fff
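For reference, the two experiments described above correspond to an xl.cfg fragment along these lines (a sketch only; all values other than the options named in the thread are hypothetical, and older Xen versions such as 4.10 use builder="hvm" rather than type="hvm"):

```
# Guest config matching the experiments above (illustrative values)
builder  = "hvm"
viridian = 0        # viridian=1 gave BSOD HAL_MEMORY_ALLOCATION
vcpus    = 1        # more than one vCPU failed to boot on the 3700X
memory   = 4096
```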
On 20.08.2019 20:12, Andrew Cooper wrote:
Xen version 4.10.2. dom0 kernel 4.13.16. The BIOS version is unchanged
from 2700X (working) to 3700X (crashing).
So you've done a Zen v1 => Zen v2 CPU upgrade on an existing system?
By "existing system" you mean the Windows installation? Yes, but it
On 20.08.2019 22:38, Andrew Cooper wrote:
On 20/08/2019 21:36, Andreas Kinzler wrote:
On 20.08.2019 20:12, Andrew Cooper wrote:
Xen version 4.10.2. dom0 kernel 4.13.16. The BIOS version is unchanged
from 2700X (working) to 3700X (crashing).
So you've done a Zen v1 => Zen v2 CPU upgrad
Hello All,
I compared the CPUID listings from Ryzen 2700X (attached as tar.xz) to
3700X and found only very few differences. I added
cpuid = [ "0x8008:ecx=0100" ]
to xl.cfg and then Windows runs great with 16 vCPUs. Cinebench R15 score
is >2050 which is more o
On 15.11.2019 18:13, George Dunlap wrote:
On 11/15/19 5:06 PM, Andreas Kinzler wrote:
Hello All,
I compared the CPUID listings from Ryzen 2700X (attached as tar.xz) to
3700X and found only very few differences. I added
cpuid = [ "0x8008:ecx=0100" ]
On 15.11.2019 12:01, Andreas Kinzler wrote:
On 14.11.2019 12:29, Jan Beulich wrote:
On 14.11.2019 00:10, Andreas Kinzler wrote:
I came across the following: https://lkml.org/lkml/2019/8/29/536
Could that be the reason for the problem mentioned below? Xen is using
HPET as clocksource on the
On 19.11.2019 10:29, Jan Beulich wrote:
On 18.11.2019 20:35, Andreas Kinzler wrote:
On 15.11.2019 12:01, Andreas Kinzler wrote:
On 14.11.2019 12:29, Jan Beulich wrote:
On 14.11.2019 00:10, Andreas Kinzler wrote:
I came across the following: https://lkml.org/lkml/2019/8/29/536
Could that be
On 18.11.2019 17:25, George Dunlap wrote:
Where were these values collected -- on a PV dom0? Or from within the
guest?
Neither. Bare metal kernel - no Xen at all.
Could you try this with `0111` instead?
Works. '1000' crashes again. Now it is clear that 7 is the maximum
Windows accepts.
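The four-bit patterns being probed above (0100 works, 0111 works, 1000 crashes) match the width of a field in the ECX register of CPUID leaf 0x80000008, which the xl override "0x8008:ecx=..." appears to be patching. As a hedged illustration only (field layout taken from AMD's CPUID documentation; the sample register value is invented, not captured from this hardware), the two relevant fields decode like this:

```shell
#!/bin/sh
# Decode CPUID Fn8000_0008 ECX fields (layout per AMD's CPUID docs):
#   ECX[7:0]    NC               - threads per package, minus 1
#   ECX[15:12]  ApicIdCoreIdSize - log2 of APIC IDs reserved per package
ecx=0x0000400f   # hypothetical sample: ApicIdCoreIdSize=4, NC=15

nc=$(( ecx & 0xff ))
apic_id_core_id_size=$(( (ecx >> 12) & 0xf ))

echo "threads per package: $(( nc + 1 ))"
echo "ApicIdCoreIdSize:    ${apic_id_core_id_size}"
```

On that reading, the experiments suggest Windows tolerates an ApicIdCoreIdSize of at most 7 on this CPU, though the thread itself only establishes which bit patterns boot.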
On 22.11.2019 13:58, Andrew Cooper wrote:
On 22/11/2019 12:57, Jan Beulich wrote:
On 22.11.2019 13:50, Andrew Cooper wrote:
On 22/11/2019 12:46, Jan Beulich wrote:
Linux commit fc5db58539b49351e76f19817ed1102bf7c712d0 says
"Some Coffee Lake platforms have a skewed HPET timer once the SoCs ent
On 25.11.2019 11:15, Jan Beulich wrote:
On 23.11.2019 00:10, Andreas Kinzler wrote:
BTW: Xeon E-2136 @ C242 has 8086:3eca as ID. One needs to check with
Intel which combinations are really affected.
Are you saying you observed the same issue on such a (server processor)
system as well? Neither
On 20.08.2019 22:38, Andrew Cooper wrote:
On 20/08/2019 21:36, Andreas Kinzler wrote:
Is it a known problem? Did someone test the new EPYCs?
This looks familiar, and is still somewhere on my TODO list.
Do you already know the reason or is that still to investigate?
Does booting with a single
Hello all, hello Paul,
On a certain new mainboard with chipset C242 and Intel Xeon E-2136 I
notice a severe clock drift. This is from dom0:
# uptime
20:13:52 up 81 days, 1:41, 1 user, load average: 0.00, 0.00, 0.00
# hwclock
2019-10-12 20:27:37.204966+02:00
# date
Sat Oct 12 20:07:19 CEST
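The hwclock/date delta above works out to roughly 15 seconds of drift per day. A quick sketch of that arithmetic, using the timestamps printed above:

```shell
#!/bin/sh
# hwclock read 20:27:37, date read 20:07:19, after ~81 days of uptime
hw=$((  20*3600 + 27*60 + 37 ))
sys=$(( 20*3600 +  7*60 + 19 ))
drift=$(( hw - sys ))

echo "total drift:   ${drift} s"
echo "drift per day: $(( drift / 81 )) s"
```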
Hello All,
https://www.reddit.com/r/Amd/comments/ckr5f4/amd_ryzen_3000_series_linux_support_and/
is concerning KVM, but it identified that the TOPOEXT feature was
important to getting windows to boot.
I just tried qemu 3.1.1 with KVM (kernel 5.1.21) on a Ryzen 3700X and
started qemu with "-cp
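The truncated command line above presumably enabled TOPOEXT. A typical KVM invocation of that kind might look like the following (a sketch only; the machine type, memory size, SMP layout, and disk path are invented placeholders, not taken from the thread):

```shell
# topoext exposes AMD's extended topology leaf (CPUID Fn8000_001E)
# to the guest, which the linked thread identified as important for
# booting Windows on Ryzen 3xxx under KVM.
qemu-system-x86_64 \
    -enable-kvm \
    -machine q35 \
    -cpu host,topoext=on \
    -smp 16,cores=8,threads=2 \
    -m 8192 \
    -drive file=win10.img,format=raw
```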
On 06.11.2019 18:50, George Dunlap wrote:
Modern Windows guests (at least Windows 10 and Windows Server 2016)
crash when running under Xen on AMD Ryzen 3xxx desktop-class cpus (but
not the corresponding server cpus).
In my tests the second generation EPYC CPUs (codename "Rome") fail
exactly the
Hello All,
I came across the following: https://lkml.org/lkml/2019/8/29/536
Could that be the reason for the problem mentioned below? Xen is using
HPET as clocksource on the platform/mainboard. Is there an (easy) way to
verify if Xen uses PC10?
Regards Andreas
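This does not answer the PC10 question directly, but the clocksource actually in use can at least be confirmed. Two diagnostic commands (output is environment-dependent; the grep pattern assumes Xen's usual "Platform timer is ... HPET" boot line):

```shell
# On a bare-metal Linux kernel, the active clocksource is in sysfs:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Under Xen, the hypervisor reports its platform timer at boot:
xl dmesg | grep -i "platform timer"
```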
On 12.10.2019 20:47, Andreas
On 14.11.2019 12:29, Jan Beulich wrote:
On 14.11.2019 00:10, Andreas Kinzler wrote:
I came across the following: https://lkml.org/lkml/2019/8/29/536
Could that be the reason for the problem mentioned below? Xen is using
HPET as clocksource on the platform/mainboard. Is there an (easy) way to
On 15.11.2019 11:57, George Dunlap wrote:
Changeset ca2eee92df44 ("x86, hvm: Expose host core/HT topology to HVM
guests") attempted to "fake up" a topology which would induce guest
operating systems to not treat vcpus as sibling hyperthreads. This
involved (among other things) actually reporting
On 15.11.2019 12:29, George Dunlap wrote:
On 11/15/19 11:17 AM, Andreas Kinzler wrote:
I do not understand a central point: No matter why and/or how a fake
topology is presented by Xen, why did the older generation Ryzen 2xxx
work and Ryzen 3xxx doesn't? What is the change in AMD(!) no
On 15.11.2019 13:10, George Dunlap wrote:
On 11/15/19 11:39 AM, Andreas Kinzler wrote:
On 15.11.2019 12:29, George Dunlap wrote:
On 11/15/19 11:17 AM, Andreas Kinzler wrote:
I do not understand a central point: No matter why and/or how a fake
topology is presented by Xen, why did the older
On Tue, 26 Jun 2018 09:47:11 +0200, Paul Durrant
wrote:
> is not affected at all. The test uses standard iperf3 as a client -
> the passed PCI device is not used in the test - so that
> just the presence of the passed device will cause the iperf3
> performance to drop from 6.5 gbit/sec (no
I am currently researching a transmit queue timeout with Xen 4.8.2 and
Intel X722 (i40e driver). The problem occurs with various linux versions
(4.8.17, 4.13.16, SLES 15 port of i40e). The problem seems to be related
to heavy forwarding/bridging as I am running a heavy network stress test
i
On Fri, 06 Jul 2018 14:03:00 +0200, Jan Beulich wrote:
I am currently researching a transmit queue timeout with Xen 4.8.2 and
Intel X722 (i40e driver). The problem occurs with various linux versions
(4.8.17, 4.13.16, SLES 15 port of i40e). The problem seems to be related
to heavy forwarding/brid
I am currently testing PCI passthrough on the Skylake-SP platform using a
Supermicro X11SPi-TF mainboard. Using PCI passthrough (an LSI SAS HBA)
causes severe performance loss on the Skylake-SP platform while Xeon E3 v5
is not affected at all. The test uses standard iperf3 as a client - the
Hello Roger,
in August 2017, I reported a problem with PCI passthrough and MSI
interrupts
(https://lists.xenproject.org/archives/html/xen-devel/2017-08/msg01433.html).
That report led to some patches for Xen and qemu.
Some weeks ago I tried a quite new version of Xen 4.10.2-pre
(http://
Hello Roger,
Some weeks ago I tried a quite new version of Xen 4.10.2-pre
(http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=a645331a9f4190e92ccf41a950bc4692f8904239)
and the PCI card (LSI SAS HBA) using Windows 2012 R2 as a guest.
Everything
works but only to the point where Windows reboo
Fill the from_xenstore libxl_device_type hook for PCI devices so that
libxl_retrieve_domain_configuration can properly retrieve PCI devices
from xenstore.
This fixes disappearing pci devices across domain reboots.
This patch seems to be committed now. Please backport this to Xen 4.10
stable bran