Workaround / solution:
Commented out the `inet6 auto` stanza for the interface in
/etc/network/interfaces and now it keeps a stable private address as
expected. NetworkManager always seemed to yield to ifup/down (or
perhaps the reverse) in prior versions of Debian, so I hadn't thought
to check there.
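For reference, the change amounts to something like this in
/etc/network/interfaces (the interface name enp1s0 is a placeholder and
the surrounding stanzas are only a sketch, not my actual file):

    auto enp1s0
    iface enp1s0 inet dhcp

    # Commented out so ifupdown no longer touches IPv6 on this interface
    # and NetworkManager keeps its stable-privacy address:
    #iface enp1s0 inet6 auto

After a reboot, `ip -6 addr show dev enp1s0` can be used to confirm the
global address stays the same.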
Package: network-manager
Version: 1.30.0-2
Tags: ipv6
I'm running a Debian system on a LAN segment configured to assign IPv6
addresses via SLAAC. After upgrading from Buster to Bullseye,
NetworkManager starts the system with an IPv6 stable-privacy address as
usual. However, after some time (possib
I've upgraded my VMs to the 10.3 point release and can confirm that
cryptographic services (SSH and others) start quite rapidly now on
system boot.
Thanks, all!
-Michael
Apologies for the late reply. I can certainly test on some of my VMs if
you're willing to provide packages.
Reading over Linus' explanation of deriving jitter from the CPU's cycle
counter, while I'm no cryptographer, I might have some concerns about
the quality of the entropy that will be generated.
> The release notes for buster do mention this issue and provide a
> link to:
>
> https://wiki.debian.org/BoottimeEntropyStarvation
>
> which has your Haveged solution as one of its suggestions.
>
D'oh! Serves me right for just skimming the release notes, then. After
doing some in-depth r
Package: linux-image-4.19.0-5-amd64
Version: 4.19.0-5
Issue:
==
After upgrading to Debian Buster, Xen PV guests' entropy pool is too
low to start cryptographic services in a timely manner. This results in
30+ second delays in the startup of services such as SSH. If I connect
to the VM's virtu
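A quick way to quantify the starvation while a service is stalled, for
anyone reproducing this, is to check the kernel's own counters:

    cat /proc/sys/kernel/random/entropy_avail   # bits currently in the pool
    cat /proc/sys/kernel/random/poolsize        # pool capacity, for comparison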
I've tested the workaround successfully. Added `pti=off` to my kernel's
boot arguments, updated GRUB, and it started as intended.
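In case it saves anyone a search, what I did was roughly this (assuming
the stock Debian GRUB layout; the variable to edit may differ if you
have customized /etc/default/grub):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet pti=off"

    # then regenerate grub.cfg and reboot
    update-grub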
Benoît,
Just to be sure, since you're loading your guests' kernels directly
like that, you're passing pti=off via the `extra` config line in your
domU config files, right?
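i.e. something along these lines in the domU .cfg (the paths and root
device here are just placeholders):

    kernel  = "/boot/guests/vmlinuz-4.9.0-7-amd64"
    ramdisk = "/boot/guests/initrd.img-4.9.0-7-amd64"
    extra   = "root=/dev/xvda2 ro pti=off"   # command line passed to the guest kernel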
This also apparently affects at least PV guests. Upgrading a PV domU to
kernel 4.9.110-1 and rebooting yields the following output via xl's
console:
Loading Linux 4.9.0-6-amd64 ...
Loading Linux 4.9.0-7-amd64 ...
Loading initial ramdisk ... [ vmlinuz-4.9.0-7-amd64 2.69MiB 66% 1.67MiB/s ]
[
Package: linux-image-4.9.0-7-amd64
Version: 4.9.110-1
Description:
After installing the latest Stretch kernel, 4.9.110-1, on a server
running Xen Hypervisor 4.8, bootstrapping the kernel fails. GRUB loads
the hypervisor as normal, which then attempts to load the Dom0 kernel.
Once tha
On further investigation, Arne's absolutely right. I upgraded the
kernel back to 4.9.88-1 from Debian Security and installed 'haveged'
(an entropy-gathering daemon). Everything started quickly and
normally after a reboot. Turns out I hadn't noticed this on any of my
other virtual servers becaus
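For the record, the workaround on the affected domU amounted to
something like:

    apt-get update
    apt-get install haveged
    systemctl enable --now haveged   # begins feeding the kernel's entropy pool immediately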
Interesting! I'd also noticed 'random: init done' being piped to console well
after the server had booted, but I didn't mention it because I didn't think it
was related. What you've said makes a lot of sense.
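For what it's worth, the timing is easy to pull out of the logs after
the fact, e.g.:

    dmesg | grep 'random:'
    # or, with wall-clock timestamps, kernel messages from the current boot:
    journalctl -b -k | grep 'random:'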
On Sat, 5 May 2018 11:54:54 +0200 Arne Nordmark wrote:
> I have also seen this on a co
Package: linux-image-4.9.0-6-amd64
Version: 4.9.88-1
Issue:
==
Kernel "linux-image-4.9.0-6-amd64," version 4.9.88-1, breaks systemd
startup of RPC, Kerberos KDC services.
Description:
After upgrading to the latest Stretch kernel (4.9.88-1), RPC and KDC
services time out during
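For anyone diagnosing the same thing: the slowest units and their logs
can be inspected with something like the following (the unit names
assume the stock Debian packages).

    systemd-analyze blame | head
    journalctl -b -u rpcbind.service -u krb5-kdc.service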
Aha! Okay, that certainly explains some things. I didn't realize PVH
was a "use at your own risk" tech preview in Xen 4.8 and kernel 4.9.
Luckily, my infrastructure didn't rely on PVH to start with; I can go
back to conventional PV or HVM with no problem.
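For reference, if I remember the Xen 4.8 xl config syntax right, the
change per guest is just a line or two in the .cfg (a sketch, not my
literal configs):

    # drop the tech-preview flag (or set it to 0); plain PV is the default builder
    pvh = 0

    # or, to run the guest as HVM instead:
    #builder = "hvm"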
Thanks for investigating!
-Michael
Package: linux-image-amd64
Version: 4.9+80+deb9u
Not sure if this needs to go to the Debian Kernel team or Debian Xen
team, so please feel free to reclassify as necessary. I'm leaning
toward this being a kernel bug, as the Xen packages had not changed
when this issue was introduced; only the kernel had.
Now that qemu-system-x86 1:2.8+dfsg-6+deb9u3 has been released, I have
upgraded and tested.
So far, so good! The CPU and RAM utilization of the `qemu-system-i386`
process seems very normal: ~4% CPU usage and ~9% RAM usage. The HVM
domU is booted and active. If it goes comatose again, I will let
everyone know.
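In case the numbers are useful later, I am watching it with something
like:

    # snapshot of the device-model process for this domU
    ps -o pid,etime,%cpu,%mem,args -C qemu-system-i386

    # and confirming the guest itself is still listed as running
    xl list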
> Does this happen with version 1:2.8+dfsg-6+deb9u1 too?
That's a good question. I did not see that version of the package
available in `apt-cache` when I performed my rollback, so I cannot say.
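For future reference (mostly my own), the versions the archive still
offers can be listed, and an older one pinned explicitly, with:

    apt-cache policy qemu-system-x86
    apt-get install qemu-system-x86=1:2.8+dfsg-6+deb9u1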
> Does this happen with qemu-system-x86_64?
The architecture being run *is* 64-bit. As I understand
Package: qemu-system-x86
Version: 1:2.8+dfsg-6+deb9u2
My apologies for mistakenly sending the original ticket in HTML format.
Adding proper header tags here to hopefully make categorization easier.
The original HTML transcript (message part 2) is readable.
Package: qemu-system-x86
Version: 1:2.8+dfsg-6+deb9u2
Problem:
==
Latest Stretch `qemu-system-i386` process consumes the majority of Xen
Dom0's RAM, ultimately crashes DomU.
Symptoms:
When starting a Xen HVM guest with qemu-system-x86
version 1:2.8+dfsg-6+deb9u2 installed, the `qemu-system-