[Kernel-packages] [Bug 1822118] Re: Kernel Panic while rebooting cloud instance
** Also affects: systemd (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822118

Title:
  Kernel Panic while rebooting cloud instance

Status in linux-azure package in Ubuntu:
  In Progress
Status in systemd package in Ubuntu:
  New

Bug description:
  Description: If a particular Azure cloud instance is rebooted, it is
  possible that it never recovers and the instance stays broken
  indefinitely. In my case it was a kernel panic; see the specifics
  below.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  I had a simple script to reboot an instance (X) number of times; I
  chose 50. The machine would power-cycle by issuing "reboot" from the
  terminal prompt, just as a user would. Once the machine came back up,
  the script captured dmesg and other bits, then rebooted again until
  it reached 50. After the 4th attempt my script timed out. I took a
  look at the instance console log, and the following was displayed:

  [ OK ] Reached target Reboot.
  /shutdown: error while loading shared libra[   89.498980] Kernel panic - not syncing: Attempted to kill init! exitcode=0x7f00
  [   89.498980]
  [   89.500042] CPU: 0 PID: 1 Comm: shutdown Not tainted 4.18.0-1013-azure #13-Ubuntu
  [   89.508026] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090007 06/02/2017
  [   89.508026] Call Trace:
  [   89.508026]  dump_stack+0x63/0x8a
  [   89.508026]  panic+0xe7/0x247
  [   89.508026]  do_exit.cold.23+0x26/0x75
  [   89.508026]  do_group_exit+0x43/0xb0
  [   89.508026]  __x64_sys_exit_group+0x18/0x20
  [   89.508026]  do_syscall_64+0x5a/0x110
  [   89.508026]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [   89.508026] RIP: 0033:0x7f7bf0154d86
  [   89.508026] Code: Bad RIP value.
  [   89.508026] RSP: 002b:7ffd6be693b8 EFLAGS: 0206 ORIG_RAX: 00e7
  [   89.508026] RAX: ffda RBX: 7f7bf015e420 RCX: 7f7bf0154d86
  [   89.508026] RDX: 007f RSI: 003c RDI: 007f
  [   89.508026] RBP: 7f7bef9449c0 R08: 00e7 R09:
  [   89.508026] R10: 7ffd6be6974c R11: 0206 R12: 0018
  [   89.508026] R13: 7f7bef944ac8 R14: 7f7bef944a00 R15:
  [   89.508026] Kernel Offset: 0x1600 from 0x8100 (relocation range: 0x8000-0xbfff)
  [   89.508026] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x7f00
  [   89.508026] ]---

  This only occurred once in my testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822118/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
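For reference, the reboot-soak harness described in the report can be sketched as below. The original script was not attached, so the names (`soak_test`, the injected `run` callable) are illustrative assumptions; a real harness would issue the commands over SSH and fail the run when the instance stops answering within a timeout, as happened on the 4th attempt here.

```python
# Sketch of a reboot-soak test: power-cycle an instance N times,
# capturing dmesg after each boot. The command runner is injected so
# the loop can be exercised without a real cloud instance.
from typing import Callable, List


def soak_test(run: Callable[[str], str], cycles: int = 50) -> List[str]:
    """Reboot `cycles` times, collecting dmesg output after each boot."""
    logs = []
    for _ in range(cycles):
        logs.append(run("dmesg"))   # capture the log from the current boot
        run("sudo reboot")          # power-cycle, just as a user would
        # A real harness would now poll SSH until the instance is back,
        # with a timeout that aborts the whole run on failure.
    return logs
```

In the failing case above, the timeout would have fired because the instance panicked during shutdown and never came back.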
[Kernel-packages] [Bug 1831453] Re: [Hyper-V] Install issue for Ubuntu 19.04
** Package changed: linux (Ubuntu) => subiquity (Ubuntu)

** Changed in: subiquity (Ubuntu)
       Status: Incomplete => New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1831453

Title:
  [Hyper-V] Install issue for Ubuntu 19.04

Status in subiquity package in Ubuntu:
  New

Bug description:
  Hi,

  While trying to install Ubuntu 19.04 on Hyper-V (using this image:
  http://releases.ubuntu.com/19.04/ubuntu-19.04-live-server-amd64.iso),
  I found that the installation hung and dropped to a terminal. No data
  was written to the VHDx file.

  I tried multiple variations:
  - Windows Server 2019 and Windows Server 2016
  - Gen 1 and Gen 2 VMs
  - 1/2/4 vCPUs
  - Static memory / dynamic memory
  - w/o NIC attached
  - Live boot / install

  The kernel version is 5.0.0-13-generic. I'm attaching the dmesg
  output.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.04
  Package: linux-image-5.0.0-13-generic (not installed)
  ProcVersionSignature: Ubuntu 5.0.0-13.14-generic 5.0.6
  Uname: Linux 5.0.0-13-generic x86_64
  AlsaDevices:
   total 0
   crw-rw+ 1 root audio 116,  1 Jun  3 11:39 seq
   crw-rw+ 1 root audio 116, 33 Jun  3 11:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
  ApportVersion: 2.20.10-0ubuntu27
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperVersion: 1.405
  Date: Mon Jun 3 12:33:51 2019
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
  LiveMediaBuild: Ubuntu-Server 19.04 "Disco Dingo" - Release amd64 (20190416.2)
  Lspci:
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  PciMultimedia:
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=C.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/casper/vmlinuz boot=casper quiet ---
  RelatedPackageVersions:
   linux-restricted-modules-5.0.0-13-generic N/A
   linux-backports-modules-5.0.0-13-generic N/A
   linux-firmware N/A
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 4159-4754-0782-4271-8692-8073-78
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v1.0
  dmi.sys.vendor: Microsoft Corporation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/subiquity/+bug/1831453/+subscriptions
[Kernel-packages] [Bug 1826378] Re: linux autopkg tests failing in bionic and cosmic -proposed
** Changed in: linux (Ubuntu)
       Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1826378

Title:
  linux autopkg tests failing in bionic and cosmic -proposed

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The linux autopkg tests block evaluation of the binutils, gcc-7 and
  gcc-8 updates in bionic-proposed and cosmic-proposed. There doesn't
  seem to be any progress, and pings on #ubuntu-release don't get
  attention. While these packages should probably stay in -proposed for
  longer than a week, the failing autopkg tests don't allow clean
  britney test runs.

  Pretty please address this issue, either by stating that these test
  results can be ignored, or by fixing them. For the future, it would
  be nice if this kind of ping could be avoided.

  See
  http://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/update_excuses.html
  http://people.canonical.com/~ubuntu-archive/proposed-migration/cosmic/update_excuses.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1826378/+subscriptions
[Kernel-packages] [Bug 1746806] Re: sssd appears to crash AWS c5 and m5 instances, cause 100% CPU
** Changed in: cloud-images
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1746806

Title:
  sssd appears to crash AWS c5 and m5 instances, cause 100% CPU

Status in cloud-images:
  Fix Released
Status in linux package in Ubuntu:
  Fix Released
Status in linux-aws package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Fix Released
Status in linux-aws source package in Xenial:
  Fix Released

Bug description:
  After upgrading to the Ubuntu EC2 AMI from 20180126 (specifically
  ami-79873901 in us-west-2), we have seen sssd hard-locking c5 and m5
  EC2 instances after the service is started, with CPU going to 100%.
  We do not experience this issue with t2 or c4 instance types, and we
  do not see this issue on any instance types using Ubuntu Cloud images
  from 20180109 or before.

  I have verified that this is kernel related: I booted an image that
  we created using the Ubuntu cloud image from 20180109, which works
  fine on a c5. I then ran "apt update && apt install --only-upgrade
  linux-aws && systemctl disable sssd", rebooted the server, verified I
  was on the new kernel, and started sssd with "systemctl start sssd";
  the EC2 instance froze and CloudWatch CPU usage for that instance
  went to 100%.

  I haven't been able to find much in the syslog, kern.log, journalctl
  logs, etc. The only thing I have found is that when this happens I
  tend to see "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@" in the
  syslog and sssd log files.

  I have attached several log files and the output of "apport-bug
  /usr/sbin/sssd". Let me know if you need anything else to help track
  this down.

  Thanks,
  Paul

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1746806/+subscriptions
[Kernel-packages] [Bug 1537923] Re: Add pvpanic driver to Ubuntu 15.10 virtual kernel
Brad - fix is verified.

** Tags removed: verification-needed-wily
** Tags added: verification-done-wily

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1537923

Title:
  Add pvpanic driver to Ubuntu 15.10 virtual kernel

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Wily:
  Fix Committed
Status in linux source package in Xenial:
  Fix Released

Bug description:
  GCE's Ubuntu 15.10 images lack the pvpanic driver. The driver is
  useful for detecting guest panics, and we would like to see it added
  to the Ubuntu 15.10 virtual kernel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1537923/+subscriptions
[Kernel-packages] [Bug 1457168] Re: [Hyper-V] kvp, vrss, fcopy daemons no longer in init
** Changed in: linux-lts-utopic (Ubuntu)
       Status: Confirmed => Fix Released

** Changed in: linux-lts-vivid (Ubuntu)
       Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-lts-utopic in Ubuntu.
https://bugs.launchpad.net/bugs/1457168

Title:
  [Hyper-V] kvp, vrss, fcopy daemons no longer in init

Status in linux-lts-utopic package in Ubuntu:
  Fix Released
Status in linux-lts-vivid package in Ubuntu:
  Fix Released

Bug description:
  hv-fcopy-daemon.conf, hv-kvp-daemon.conf and hv-vss-daemon.conf are
  missing from /etc/init, and therefore these critical daemons are no
  longer launched on Hyper-V installations that use
  linux-virtual-lts-utopic or other instances of the HWE kernel.

  Because Azure now uses the HWE kernel, in the generic 12.04 and 14.04
  images I see the file /etc/init/hv-kvp-daemon.conf, but executing
  this fails to start the service (because negotiation of these
  services is done in the first minute after an image is started).

  On the 12.04 image I can install the proper linux-tools-`uname -r`
  and linux-cloud-tools-`uname -r`, but it seems that perhaps the init
  script is not pointing to the correct binary. Normally
  /usr/sbin/hv_kvp_daemon is a wrapper that finds
  /usr/sbin/hv_kvp_daemon-`uname -r`, but that latter file is never
  installed. On Ubuntu 12.04 I do see that
  "/usr/lib/linux-lts-trusty-tools-3.13.0-51/hv_kvp_daemon" is
  installed and works, but I'm not sure yet what's missing to get the
  upstart service working.

  Please correct the init scripts for the HWE kernels on 12.04 and
  14.04.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-virtual-lts-utopic 3.16.0.37.29
  ProcVersionSignature: Ubuntu 3.16.0-37.51~14.04.1-generic 3.16.7-ckt9
  Uname: Linux 3.16.0-37-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.10
  Architecture: amd64
  Date: Wed May 20 11:00:57 2015
  EcryptfsInUse: Yes
  InstallationDate: Installed on 2015-04-09 (40 days ago)
  InstallationMedia: Ubuntu-Server 14.04.2 LTS "Trusty Tahr" - Release amd64 (20150218.1)
  ProcEnviron:
   TERM=linux
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: linux-meta-lts-utopic
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-lts-utopic/+bug/1457168/+subscriptions
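For readers unfamiliar with what the missing /etc/init files contain: an upstart job for one of these daemons looks roughly like the following minimal sketch. This is an illustration only, assuming the daemon binary lands at /usr/sbin/hv_kvp_daemon; the real jobs shipped by linux-cloud-tools may use different start conditions and paths.

```
# /etc/init/hv-kvp-daemon.conf -- minimal sketch, not the packaged job
description "Hyper-V Key-Value Pair daemon"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec /usr/sbin/hv_kvp_daemon
```

As the report notes, timing matters: the host negotiates these services within about a minute of boot, so the job must start early rather than be launched by hand later.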
[Kernel-packages] [Bug 1551419] Re: Fix UUID endianness patch breaks cloud-init on Azure
** Changed in: linux (Ubuntu)
     Assignee: Dan Watkins (daniel-thewatkins) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1551419

Title:
  Fix UUID endianness patch breaks cloud-init on Azure

Status in cloud-init package in Ubuntu:
  In Progress
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  On Azure, cloud-init relies on the system-uuid, as provided by
  SMBIOS, as a unique ID for a cloud instance. If this ID ever changes,
  then cloud-init will attempt to reprovision the VM.

  This recent patch in the Ubuntu kernel corrects the endianness of
  some SMBIOS fields, but also has the effect of causing cloud-init to
  think that the system-uuid has changed:
  http://kernel.ubuntu.com/git/ubuntu/ubuntu-trusty.git/commit/drivers/firmware?id=3ec24c55be6c543797ba3ee9a227a5631aef607e

  The impact is that cloud-init attempts to reprovision the VM, often
  causing the customer to lose access to their VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1551419/+subscriptions
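For context on why an endianness fix changes the ID: the same 16 bytes in the SMBIOS table render as two different UUID strings depending on whether the first three fields are read big- or little-endian, which is exactly what made cloud-init see a "new" system-uuid. A minimal illustration with Python's uuid module (the bytes here are an arbitrary example, not a real instance ID):

```python
import uuid

# Example 16 bytes as they might sit in the SMBIOS table (not a real ID).
raw = bytes(range(16))

# Pre-patch behaviour: every field interpreted as big-endian.
as_big = uuid.UUID(bytes=raw)
# SMBIOS >= 2.6 semantics: the first three fields are little-endian,
# which is what uuid's bytes_le layout models.
as_little = uuid.UUID(bytes_le=raw)

print(as_big)     # 00010203-0405-0607-0809-0a0b0c0d0e0f
print(as_little)  # 03020100-0504-0706-0809-0a0b0c0d0e0f
```

The underlying bytes never changed across the kernel update; only the string rendering did, so any consumer that caches the rendered form (as cloud-init does) sees a different ID.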
[Kernel-packages] [Bug 1537923] [NEW] Add pvpanic driver to Ubuntu 15.10 virtual kernel
Public bug reported:

GCE's Ubuntu 15.10 images lack the pvpanic driver. The driver is
useful for detecting guest panics, and we would like to see it added
to the Ubuntu 15.10 virtual kernel.

** Affects: linux (Ubuntu)
   Importance: Undecided
       Status: Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1537923

Title:
  Add pvpanic driver to Ubuntu 15.10 virtual kernel

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  GCE's Ubuntu 15.10 images lack the pvpanic driver. The driver is
  useful for detecting guest panics, and we would like to see it added
  to the Ubuntu 15.10 virtual kernel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1537923/+subscriptions
[Kernel-packages] [Bug 1415634] Re: RFC: replace linux-virtual with linux-server / tune kernel packages
No log file needed, Mr Brad Figg automated script.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1415634

Title:
  RFC: replace linux-virtual with linux-server / tune kernel packages

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We currently build two meta packages for easy consumption and
  automatic kernel upgrades: linux-virtual and linux-generic. The
  -virtual meta package was created with the general purpose of
  "support virtual environments", and then extended to include "and
  common cloud workloads" (adding things like rbd and kvm).

  Would it be possible to find a happy medium where we had enough
  drivers to enable network, block, and essential server devices but
  maintained a reasonable install size? That would let us create one
  set of images for use in cloud environments, be they on bare metal
  or hypervisors.

  As an example of sizes collected from a cloud image, we have:

  release | kernel        | apt inst | initrd | /lib/modules
  vivid   | linux-virtual | 125M     | 8M     | 34M
  vivid   | linux-generic | 358M     | 27M    | 193M
  trusty  | linux-virtual | 119M     | 7M     | 31M
  trusty  | linux-generic | 337M     | 24M    | 184M

  'apt inst' is as reported by apt install after 'apt-get --purge
  ^linux-.*'; vivid version 3.18.0.11, trusty version 3.13.0.44.51.
  Both packages share the linux kernel binary, which in this case is
  5.6M on trusty and 6.3M on vivid.

  For a cloud image with a default install in the 700M range, the
  difference between -virtual and -generic is considerable, so it
  clearly has its value. However, this value comes at the cost of
  specialization. If a user installs a server via MAAS or via ISO,
  they get linux-generic and have one set of modules/kernel function.
  If they run a cloud image, they have a different set. This is less
  than desirable, as we'd like to say that both cases are "Ubuntu
  Server". It also means that we have to build "maas images", which
  are primarily "cloud images with a hardware kernel".

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1415634/+subscriptions
[Kernel-packages] [Bug 1415634] Re: RFC: replace linux-virtual with linux-server / tune kernel packages
** Changed in: linux (Ubuntu)
       Status: Incomplete => New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1415634

Title:
  RFC: replace linux-virtual with linux-server / tune kernel packages

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We currently build two meta packages for easy consumption and
  automatic kernel upgrades: linux-virtual and linux-generic. The
  -virtual meta package was created with the general purpose of
  "support virtual environments", and then extended to include "and
  common cloud workloads" (adding things like rbd and kvm).

  Would it be possible to find a happy medium where we had enough
  drivers to enable network, block, and essential server devices but
  maintained a reasonable install size? That would let us create one
  set of images for use in cloud environments, be they on bare metal
  or hypervisors.

  As an example of sizes collected from a cloud image, we have:

  release | kernel        | apt inst | initrd | /lib/modules
  vivid   | linux-virtual | 125M     | 8M     | 34M
  vivid   | linux-generic | 358M     | 27M    | 193M
  trusty  | linux-virtual | 119M     | 7M     | 31M
  trusty  | linux-generic | 337M     | 24M    | 184M

  'apt inst' is as reported by apt install after 'apt-get --purge
  ^linux-.*'; vivid version 3.18.0.11, trusty version 3.13.0.44.51.
  Both packages share the linux kernel binary, which in this case is
  5.6M on trusty and 6.3M on vivid.

  For a cloud image with a default install in the 700M range, the
  difference between -virtual and -generic is considerable, so it
  clearly has its value. However, this value comes at the cost of
  specialization. If a user installs a server via MAAS or via ISO,
  they get linux-generic and have one set of modules/kernel function.
  If they run a cloud image, they have a different set. This is less
  than desirable, as we'd like to say that both cases are "Ubuntu
  Server". It also means that we have to build "maas images", which
  are primarily "cloud images with a hardware kernel".

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1415634/+subscriptions
[Kernel-packages] [Bug 1609945] [NEW] update linux-hwe-virtual-trusty meta package to point to hwe-x
Public bug reported:

Please update the linux-hwe-virtual-trusty and linux-hwe-generic-trusty
meta packages to point to hwe-x, so that those who use
linux-hwe-virtual-trusty to get the most recent HWE kernel will get
hwe-x.

** Affects: linux (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1609945

Title:
  update linux-hwe-virtual-trusty meta package to point to hwe-x

Status in linux package in Ubuntu:
  New

Bug description:
  Please update the linux-hwe-virtual-trusty and linux-hwe-generic-
  trusty meta packages to point to hwe-x, so that those who use
  linux-hwe-virtual-trusty to get the most recent HWE kernel will get
  hwe-x.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1609945/+subscriptions
[Kernel-packages] [Bug 1626436] Re: [4.8 regression] boot has become very slow
I marked the "affects me too" button at the top, but just wanted to add
that we're seeing slow boots on EC2 also. With the latest yakkety AMI
on EC2 (ami-1bc5820c, us-east-1, m4.large), first and subsequent boots
are much slower than expected. Top 5 in the blame:

$ systemd-analyze blame
  2min 6.384s cloud-init.service
      14.945s cloud-init-local.service
       5.595s keyboard-setup.service
       4.079s dev-xvda1.device
       2.587s cloud-config.service

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1626436

Title:
  [4.8 regression] boot has become very slow

Status in linux package in Ubuntu:
  Triaged

Bug description:
  With yakkety's recent update from linux 4.4 to 4.8, booting has
  become a lot slower. It's not one service in particular, but without
  "quiet" and "splash" you can now easily read every single line
  instead of that whole wall of text zipping by. It now takes over 20s
  instead of ~10 seconds to boot.

  This is even more dramatic when factoring out the recent boot hang
  of NetworkManager (bug 1622893) and disabling lightdm:

    sudo systemctl mask NetworkManager NetworkManager-wait-online lightdm

  Booting then takes 1.5s with 4.4 and 19.5s (!) with 4.8. Some
  excerpts from systemd-analyze blame:

  4.4:
   474ms postfix@-.service
   395ms lxd-containers.service
   305ms networking.service

  4.8:
   4.578s postfix@-.service
   7.300s lxd-containers.service
   6.285s networking.service

  I attach the full outputs of critical-chain and analyze for 4.4 and
  4.8 for reference.

  This is much less noticeable in the running system. There is no
  immediate feeling of sluggishness (although my system is by and
  large idle). I compared the time of sbuilding colord under similar
  circumstances (-j4, building on tmpfs, thus no hard disk delays;
  running with a fully pre-loaded apt-cacher-ng, thus no random
  network delays), and with 4.4 it takes 6.5 minutes and with 4.8 it
  takes 7.5. So that got a bit slower, but much less dramatically than
  during boot, so this is either happening when a lot of processes run
  in parallel, or is perhaps related to setting up cgroups.

  One thing I noticed is that during sbuild under 4.8, "top" shows
  ridiculous loads (~250), while it's around 4 or 5 under 4.4. But
  that doesn't reflect in actual sluggishness, so this might be just
  an unrelated bug.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: linux-image-4.8.0-11-generic 4.8.0-11.12
  ProcVersionSignature: Ubuntu 4.8.0-11.12-generic 4.8.0-rc6
  Uname: Linux 4.8.0-11-generic x86_64
  ApportVersion: 2.20.3-0ubuntu7
  Architecture: amd64
  AudioDevicesInUse:
   USER PID ACCESS COMMAND
   /dev/snd/pcmC0D0c: martin 3049 F...m pulseaudio
   /dev/snd/pcmC0D0p: martin 3049 F...m pulseaudio
   /dev/snd/controlC0: martin 3049 F pulseaudio
  Date: Thu Sep 22 09:42:56 2016
  EcryptfsInUse: Yes
  MachineType: LENOVO 2324CTO
  ProcEnviron:
   TERM=linux
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=de_DE.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/@/boot/vmlinuz-4.8.0-11-generic.efi.signed root=UUID=f86539b0-3a1b-4372-83b0-acdd029ade68 ro rootflags=subvol=@ systemd.debug-shell
  RelatedPackageVersions:
   linux-restricted-modules-4.8.0-11-generic N/A
   linux-backports-modules-4.8.0-11-generic N/A
   linux-firmware 1.161
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/09/2013
  dmi.bios.vendor: LENOVO
  dmi.bios.version: G2ET95WW (2.55 )
  dmi.board.asset.tag: Not Available
  dmi.board.name: 2324CTO
  dmi.board.vendor: LENOVO
  dmi.board.version: 0B98401 Pro
  dmi.chassis.asset.tag: No Asset Information
  dmi.chassis.type: 10
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: Not Available
  dmi.modalias: dmi:bvnLENOVO:bvrG2ET95WW(2.55):bd07/09/2013:svnLENOVO:pn2324CTO:pvrThinkPadX230:rvnLENOVO:rn2324CTO:rvr0B98401Pro:cvnLENOVO:ct10:cvrNotAvailable:
  dmi.product.name: 2324CTO
  dmi.product.version: ThinkPad X230
  dmi.sys.vendor: LENOVO

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1626436/+subscriptions
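When comparing `systemd-analyze blame` output across kernels like this, it helps to normalise the mixed "2min 6.384s" / "14.945s" / "305ms" durations to plain seconds. A small helper for that (the function name is mine, not part of systemd):

```python
import re

def blame_seconds(duration: str) -> float:
    """Convert a systemd-analyze duration string such as '2min 6.384s',
    '14.945s' or '305ms' into seconds."""
    units = {"min": 60.0, "s": 1.0, "ms": 0.001}
    total = 0.0
    # 'ms' must be tried before 's' so '305ms' is not split as '305m' + 's'.
    for value, unit in re.findall(r"([\d.]+)(min|ms|s)", duration):
        total += float(value) * units[unit]
    return total

print(blame_seconds("2min 6.384s"))  # 126.384
print(blame_seconds("305ms"))        # 0.305
```

With the figures quoted above, cloud-init.service at "2min 6.384s" is roughly 126 seconds, against single-digit seconds for everything else, which is why it dominates the blame list.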
[Kernel-packages] [Bug 1626679] Re: NVMe triggering kernel panic followed by "bad: scheduling from the idle thread!"
Paul - have you confirmed that you are no longer seeing the issue? If
yes, please update this bug with the info.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1626679

Title:
  NVMe triggering kernel panic followed by "bad: scheduling from the
  idle thread!"

Status in linux package in Ubuntu:
  Triaged

Bug description:
  On an NVMe system I'm using, Ubuntu 16.04.1 regularly seems to
  trigger a kernel panic in some part of the NVMe driver, after which
  the logs get filled with entries, over and over again, of:

    "bad: scheduling from the idle thread!"

  Here's the initial stack trace that seems to trigger the bug:

  Sep 22 15:51:46 ubuntu kernel: [   97.478175] [ cut here ]
  Sep 22 15:51:46 ubuntu kernel: [   97.478185] WARNING: CPU: 13 PID: 0 at /build/linux-dcxD3m/linux-4.4.0/kernel/irq/manage.c:1438 __free_irq+0x1d2/0x280()
  Sep 22 15:51:46 ubuntu kernel: [   97.478188] Trying to free IRQ 38 from IRQ context!
  Sep 22 15:51:46 ubuntu kernel: [   97.478191] Modules linked in: nls_iso8859_1 ipmi_ssif intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass ioatdma mei_me sb_edac shpchp edac_core lpc_ich mei 8250_fintek ipmi_msghandler mac_hid ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr autofs4 btrfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul ixgbe crc32_pclmul dca vxlan aesni_intel ip6_udp_tunnel udp_tunnel aes_x86_64 lrw gf128mul ptp glue_helper ahci ablk_helper pps_core cryptd nvme libahci mdio wmi fjes
  Sep 22 15:51:46 ubuntu kernel: [   97.478257] CPU: 13 PID: 0 Comm: swapper/13 Not tainted 4.4.0-31-generic #50-Ubuntu
  Sep 22 15:51:46 ubuntu kernel: [   97.478260] Hardware name: Oracle Corporation ORACLE SERVER X5-2/ASM,MOTHERBOARD,1U, BIOS 30080100 04/13/2016
  Sep 22 15:51:46 ubuntu kernel: [   97.478263] 0286 4fea3140a01056a3 883f7f743b10 813f1143
  Sep 22 15:51:46 ubuntu kernel: [   97.478267] 883f7f743b58 81cb61f8 883f7f743b48 81081102
  Sep 22 15:51:46 ubuntu kernel: [   97.478271] 0026 883f5b2ea700 0026
  Sep 22 15:51:46 ubuntu kernel: [   97.478275] Call Trace:
  Sep 22 15:51:46 ubuntu kernel: [   97.478277] [] dump_stack+0x63/0x90
  Sep 22 15:51:46 ubuntu kernel: [   97.478290] [] warn_slowpath_common+0x82/0xc0
  Sep 22 15:51:46 ubuntu kernel: [   97.478294] [] warn_slowpath_fmt+0x5c/0x80
  Sep 22 15:51:46 ubuntu kernel: [   97.478299] [] ? try_to_grab_pending+0xb3/0x160
  Sep 22 15:51:46 ubuntu kernel: [   97.478302] [] __free_irq+0x1d2/0x280
  Sep 22 15:51:46 ubuntu kernel: [   97.478306] [] free_irq+0x3c/0x90
  Sep 22 15:51:46 ubuntu kernel: [   97.478314] [] nvme_suspend_queue+0x89/0xb0 [nvme]
  Sep 22 15:51:46 ubuntu kernel: [   97.478320] [] nvme_disable_admin_queue+0x27/0x90 [nvme]
  Sep 22 15:51:46 ubuntu kernel: [   97.478325] [] nvme_dev_disable+0x29e/0x2c0 [nvme]
  Sep 22 15:51:46 ubuntu kernel: [   97.478330] [] ? __nvme_process_cq+0x210/0x210 [nvme]
  Sep 22 15:51:46 ubuntu kernel: [   97.478334] [] ? dev_warn+0x6c/0x90
  Sep 22 15:51:46 ubuntu kernel: [   97.478340] [] nvme_timeout+0x110/0x1d0 [nvme]
  Sep 22 15:51:46 ubuntu kernel: [   97.478344] [] ? cpumask_next_and+0x2f/0x40
  Sep 22 15:51:46 ubuntu kernel: [   97.478348] [] ? load_balance+0x18c/0x980
  Sep 22 15:51:46 ubuntu kernel: [   97.478354] [] blk_mq_rq_timed_out+0x2f/0x70
  Sep 22 15:51:46 ubuntu kernel: [   97.478358] [] blk_mq_check_expired+0x4e/0x80
  Sep 22 15:51:46 ubuntu kernel: [   97.478363] [] bt_for_each+0xd8/0xe0
  Sep 22 15:51:46 ubuntu kernel: [   97.478367] [] ? blk_mq_rq_timed_out+0x70/0x70
  Sep 22 15:51:46 ubuntu kernel: [   97.478370] [] ? blk_mq_rq_timed_out+0x70/0x70
  Sep 22 15:51:46 ubuntu kernel: [   97.478375] [] blk_mq_queue_tag_busy_iter+0x47/0xc0
  Sep 22 15:51:46 ubuntu kernel: [   97.478379] [] ? blk_mq_attempt_merge+0xb0/0xb0
  Sep 22 15:51:46 ubuntu kernel: [   97.478383] [] blk_mq_rq_timer+0x41/0xf0
  Sep 22 15:51:46 ubuntu kernel: [   97.478389] [] call_timer_fn+0x35/0x120
  Sep 22 15:51:46 ubuntu kernel: [   97.478393] [] ? blk_mq_attempt_merge+0xb0/0xb0
  Sep 22 15:51:46 ubuntu kernel: [   97.478397] [] run_timer_softirq+0x23a/0x2f0
  Sep 22 15:51:46 ubuntu kernel: [   97.478403] [] __do_softirq+0x101/0x290
  Sep 22 15:51:46 ubuntu kernel: [   97.478407] [] irq_exit+0xa3/0xb0
  Sep 22 15:51:46 ubuntu kernel: [   97.478413] [] smp_apic_timer_interrupt+0x42/0x50
  Sep 22 15:51:46 ubuntu kernel: [   97.478417] [] apic_timer_interrupt+0x82/0x90
  Sep 22 15:51:46 ubuntu kernel: [   97.478419] [] ? cpuidle_enter_state+0x111/0x2b0
  Sep 22 15:51:46 ubun