Hi,
We have a reproducible issue with the current HEAD of the stable-4.18 branch
which crashes a network driver domain and, on some hardware, subsequently
results in a dom0 crash.
`xl info` reports free_memory : 39961; configuring a guest with
memory = 39800 and starting it gives the log below.
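For reference, a minimal sketch of the reproducer; the config file name and
any other guest settings are assumptions, only the memory figure comes from
this report:

    # how much memory the hypervisor reports as free
    xl info | grep free_memory      # free_memory : 39961
    # size the guest to consume nearly all of it
    echo 'memory = 39800' >> guest.cfg
    xl create guest.cfg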
On Tue, Nov 05, 2024 at 01:57:41PM +0100, Jan Beulich wrote:
> On 05.11.2024 13:43, James Dingwall wrote:
> > Since qemu-xen-4.18.0 the corresponding code which responds to this
> > environment variable has not been applied to the qemu tree. It doesn't make
> > sense to me t
commit 86bfb2b8105c840311645a5587bc6cce6e5312ef
Author: James Dingwall
Date: Tue Nov 5 11:16:20 2024 +0000
libxl: drop setting XEN_QEMU_CONSOLE_LIMIT in the environment (XSA-180 / CVE-2014-3672)
The corresponding code in the Xen qemu repository was not applied from
qemu-xen-4.1
Hi,
We've encountered a problem booting an 'ovmf' hvm instance from a .iso
image when dm_restrict=1. The deprivileged qemu process can't connect
the .iso image, and the qemu-dm log records:
qemu-system-i386: failed to create 'qdisk' device '5632': realization of device
xen-cdrom failed: failed xs_open: No
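For context, a minimal sketch of a guest config that exercises this path;
the file names and image path are hypothetical, and only dm_restrict, the
ovmf firmware and the cdrom device come from the report:

    cat > ovmf-guest.cfg <<'EOF'
    type = "hvm"
    bios = "ovmf"
    dm_restrict = 1
    disk = [ "format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/path/to/install.iso" ]
    EOF
    xl create ovmf-guest.cfg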
Hi,
We've added a feature to Xen 4.15 such that `xl uptime -b` reports the birth
time of the domain (i.e. a value preserved across migrations). If this would
be of wider interest I can try porting this to a more recent release and
submitting it for review.
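For illustration, hypothetical usage of the patched command; the -b flag
exists only with this local patch, and the domain name is made up:

    # plain `xl uptime` restarts its clock after a migration;
    # with the patch, -b reports the domain's preserved birth time
    xl uptime -b mydomain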
Regards,
James
On Tue, Dec 12, 2023 at 10:56:48AM +0000, Andrew Cooper wrote:
> On 12/12/2023 9:43 am, James Dingwall wrote:
> > Hi,
> >
> > We were experiencing a crash during PV domU boot on several different models
> > of hardware but all with Intel CPUs. The Xen version was based
Hi,
We were experiencing a crash during PV domU boot on several different models
of hardware but all with Intel CPUs. The Xen version was based on stable-4.15
at 4a4daf6bddbe8a741329df5cc8768f7dec664aed (XSA-444) with some local
patches. Since updating the branch to b918c4cdc7ab2c1c9e9a9b54fa9d9
On Mon, Nov 20, 2023 at 10:24:05AM +0100, Roger Pau Monné wrote:
> On Mon, Nov 20, 2023 at 08:27:36AM +0000, James Dingwall wrote:
> > On Fri, Nov 17, 2023 at 10:56:30AM +0100, Jan Beulich wrote:
> > > On 17.11.2023 10:18, James Dingwall wrote:
> > > > On Thu, No
On Fri, Nov 17, 2023 at 11:17:46AM +0100, Roger Pau Monné wrote:
> On Fri, Nov 17, 2023 at 09:18:39AM +0000, James Dingwall wrote:
> > On Thu, Nov 16, 2023 at 04:32:47PM +0000, Andrew Cooper wrote:
> > > On 16/11/2023 4:15 pm, James Dingwall wrote:
> > > > Hi,
>
On Fri, Nov 17, 2023 at 10:56:30AM +0100, Jan Beulich wrote:
> On 17.11.2023 10:18, James Dingwall wrote:
> > On Thu, Nov 16, 2023 at 04:32:47PM +0000, Andrew Cooper wrote:
> >> On 16/11/2023 4:15 pm, James Dingwall wrote:
> >>> Hi,
> >>>
> >>>
On Thu, Nov 16, 2023 at 04:32:47PM +0000, Andrew Cooper wrote:
> On 16/11/2023 4:15 pm, James Dingwall wrote:
> > Hi,
> >
> > Per the msr_relaxed documentation:
> >
> >"If using this option is necessary to fix an issue, please report a bug."
> >
Hi,
Per the msr_relaxed documentation:
"If using this option is necessary to fix an issue, please report a bug."
After recently upgrading an environment from Xen 4.14.5 to Xen 4.15.5 we
started experiencing a BSOD at boot with one of our Windows guests. We found
that enabling `msr_relaxed = 1` works around the problem.
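A minimal sketch of the workaround as it would be applied to the guest's
config; the file path is hypothetical, msr_relaxed itself is the documented
xl.cfg boolean:

    # relax handling of unimplemented MSRs for the affected Windows guest
    echo 'msr_relaxed = 1' >> /etc/xen/windows-guest.cfg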
On Tue, Oct 31, 2023 at 10:07:29AM +0000, James Dingwall wrote:
> Hi,
>
> I'm having a bit of trouble performing live migration of hvm guests. The
> sending side is xen 4.14.5 (qemu 5.0), receiving 4.15.5 (qemu 5.1). The error
> message recorded in qemu-dm---incoming.l
Hi,
I'm having a bit of trouble performing live migration of hvm guests. The
sending side is xen 4.14.5 (qemu 5.0), receiving 4.15.5 (qemu 5.1). The error
message recorded in qemu-dm---incoming.log:
qemu-system-i386: Unknown savevm section or instance '0000:00:04.0/vga' 0. Make
sure that your current VM setup matches your saved VM setup, including any
hotplugged devices
On 2022-04-27 10:17, Anthony PERARD wrote:
On Tue, Apr 19, 2022 at 01:04:18PM +0100, James Dingwall wrote:
Thank you for your feedback. I've updated the patch as suggested. I've also
incorporated two other changes: one is a simple style change for consistency,
the other is to ch
Hi Anthony,
On Tue, Apr 12, 2022 at 02:03:17PM +0100, Anthony PERARD wrote:
> Hi James,
>
> On Tue, Mar 01, 2022 at 09:35:13AM +0000, James Dingwall wrote:
> > The set_mtu() function of xen-network-common.sh currently has this code:
> >
> > if [ ${type_i
Hi,
The set_mtu() function of xen-network-common.sh currently has this code:
if [ ${type_if} = vif ]
then
    local dev_=${dev#vif}
    local domid=${dev_%.*}
    local devid=${dev_#*.}
    local FRONTEND_PATH="/local/domain/$domid/device/vif/$devid"
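A worked example of what those parameter expansions do, assuming a
hypothetical backend device name vif5.0:

    dev=vif5.0
    dev_=${dev#vif}     # strip the "vif" prefix   -> 5.0
    domid=${dev_%.*}    # drop the ".<devid>" tail -> 5
    devid=${dev_#*.}    # drop the "<domid>." head -> 0
    echo "/local/domain/$domid/device/vif/$devid"
    # prints /local/domain/5/device/vif/0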
Hi Juergen,
On Fri, Feb 25, 2022 at 03:09:05PM +0100, Juergen Gross wrote:
> On 23.02.22 19:08, James Dingwall wrote:
> > Hi,
> >
> > I have been investigating a very intermittent issue we have with xenstore
> > access hanging. Typically it seems to happen when all do
diff --git a/drivers/xen/xenfs/super.c b/drivers/xen/xenfs/super.c
index d7d64235010d..d02c451f6a4d 100644
--- a/drivers/xen/xenfs/super.c
+++ b/drivers/xen/xenfs/super.c
@@ -3,6 +3,11 @@
* xenfs.c - a filesystem for passing info between the a domain and
* the hypervisor.
*
+ * 2022-02-12 James Dingwall Introduce hide_deprecated
Hi,
I've been backporting this series to xen 4.14 and everything relating to the
backend seems to be working well. For the frontend I can see the mtu value
published to xenstore but it doesn't appear to be consumed to set the matching
mtu in the guest.
https://lists.xenproject.org/archives/html/x
On Mon, Jan 24, 2022 at 10:07:54AM +0100, Roger Pau Monné wrote:
> On Fri, Jan 21, 2022 at 03:05:07PM +0000, James Dingwall wrote:
> > On Fri, Jan 21, 2022 at 03:00:29PM +0100, Roger Pau Monné wrote:
> > > On Fri, Jan 21, 2022 at 01:34:54PM +0000, James Dingwall wrote:
> >
On Fri, Jan 21, 2022 at 03:00:29PM +0100, Roger Pau Monné wrote:
> On Fri, Jan 21, 2022 at 01:34:54PM +0000, James Dingwall wrote:
> > On 2022-01-13 16:11, Roger Pau Monné wrote:
> > > On Thu, Jan 13, 2022 at 11:19:46AM +0000, James Dingwall wrote:
> > > >
On 2022-01-13 16:11, Roger Pau Monné wrote:
On Thu, Jan 13, 2022 at 11:19:46AM +0000, James Dingwall wrote:
I have been trying to debug a problem where a vif with the backend in a
driver domain is added to dom0. Intermittently the hotplug script is
not invoked by libxl (running as xl devd
Hi,
I have been trying to debug a problem where a vif with the backend in a
driver domain is added to dom0. Intermittently the hotplug script is
not invoked by libxl (running as xl devd) in the driver domain. By
enabling some debug for the driver domain kernel and libxl I have these
messages
On Fri, Jan 07, 2022 at 12:39:04PM +0100, Jan Beulich wrote:
> On 06.01.2022 16:08, James Dingwall wrote:
> >>> On Wed, Jul 21, 2021 at 12:59:11PM +0200, Jan Beulich wrote:
> >>>
Hi Jan,
> > On Wed, Jul 21, 2021 at 12:59:11PM +0200, Jan Beulich wrote:
> >
> >> On 21.07.2021 11:29,
Hi Jan,
On Fri, Nov 05, 2021 at 01:50:04PM +0100, Jan Beulich wrote:
> On 26.07.2021 14:33, James Dingwall wrote:
> > Hi Jan,
> >
> > Thank you for taking the time to reply.
> >
> > On Wed, Jul 21, 2021 at 12:59:11PM +0200, Jan Beulich wrote:
> >>
Hi,
This is an issue that was observed on 4.11.3 but I have reproduced it on
4.14.3. After using the `xl save` command the associated `xl create` process
exits, which later results in the domain not being cleaned up when the guest
is shut down.
e.g.:
# xl list -v | grep d13cc54d-dcb8-4337-9dfe-3b04f6
Hi Jan,
Thank you for taking the time to reply.
On Wed, Jul 21, 2021 at 12:59:11PM +0200, Jan Beulich wrote:
> On 21.07.2021 11:29, James Dingwall wrote:
> > We have a system which intermittently starts up and reports an incorrect
> > cpu frequency:
> >
> > # gr
Hi,
We have a system which intermittently starts up and reports an incorrect cpu
frequency:
# grep -i mhz /var/log/kern.log
Jul 14 17:47:47 dom0 kernel: [0.000475] tsc: Detected 2194.846 MHz processor
Jul 14 22:03:37 dom0 kernel: [0.000476] tsc: Detected 2194.878 MHz processor
Jul 14 23
Hi Jan,
On Thu, Feb 04, 2021 at 10:36:06AM +0100, Jan Beulich wrote:
> X86_VENDOR_* aren't bit masks in the older trees.
>
> Reported-by: James Dingwall
> Signed-off-by: Jan Beulich
>
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -226,7 +226,8 @@ int
Hi Jan,
Thank you for your reply.
On Wed, Feb 03, 2021 at 03:55:07PM +0100, Jan Beulich wrote:
> On 01.02.2021 16:26, James Dingwall wrote:
> > I am building the xen 4.11 branch at
> > 310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit
> > 3b5de119f0399cbe745
Hi,
I am building the xen 4.11 branch at
310ab79875cb705cc2c7daddff412b5a4899f8c9 which includes commit
3b5de119f0399cbe745502cb6ebd5e6633cc139c "x86/msr: fix handling of
MSR_IA32_PERF_{STATUS/CTL}". I think this should address this error
recorded in xen's dmesg:
(XEN) d11v0 VIRIDIAN CRASH: 3 0
> [ 2551.528665] kthread+0x121/0x140
> [ 2551.528667] ? xb_read+0x1d0/0x1d0
> [ 2551.528670] ? kthread_park+0x90/0x90
> [ 2551.528673] ret_from_fork+0x35/0x40
>
> Fix this by doing the cleanup via a workqueue instead.
>
> Reported-by: James Dingwall
> Fixes: fd8aa9095a95c
> > [ 2551.528654] xenbus_dev_queue_reply+0xc4/0x220
> > [ 2551.528657] xenbus_thread+0x7de/0x880
> > [ 2551.528660] ? wait_woken+0x80/0x80
> > [ 2551.528665] kthread+0x121/0x140
> > [ 2551.528667] ? xb_read+0x1d0/0x1d0
> > [ 2551.528670] ? kthread_park+0x90/0x90
> >
Hi,
I had a bit of a head scratcher while writing a patch for 4.8 which
allows the qemu-dm process for a stubdom to be executed as an
unprivileged user. After a liberal sprinkling of log messages I found
that my problem was related to the check of the return code from
getpwnam_r. In 4.11 the
Hi,
We have 3x HPE DL180 Gen9 servers; one of these is dual CPU, the others
single. They are all running the same xen 4.8.2 build (plus some XSA patches)
and Linux 4.1.46 dom0/guest kernel. On the single CPU systems we can
successfully pass through ports from the onboard controller (igb) and a