This has been validated for B, F, I, J (Bionic, Focal, Impish, Jammy):
https://pastebin.ubuntu.com/p/cmK87VJwVB/
--
https://bugs.launchpad.net/bugs/1971141
Title:
Cannot create /dev/disk/azure/resource softlinks in Gen2
Validation of the package in Kinetic. The validation was run on Jammy with
the Kinetic package installed:
I have confirmed the bug on a 22.04 image running on AWS:
Jammy stock ec2-instance-connect ssh -vvv output:
https://pastebin.ubuntu.com/p/jDCnKrGRFM/plain/
When upgrading the package on the same
** Package changed: kexec-tools (Ubuntu) => kdump-tools (Ubuntu)
--
https://bugs.launchpad.net/bugs/1908090
Title:
ubuntu 20.04 kdump fails
@ludo, this package update may have taken a day or two to be reflected
in https://packages.ubuntu.com/bionic/openjdk-11-jdk
As you can see, it is there now:
openjdk-11-jdk (11.0.14.1+1-0ubuntu1~18.04)
And available in the repositories:
openjdk-11-jdk | 11.0.14.1+1-0ubuntu1~18.04 | bionic-securit
For anyone who finds this in a search:
Bionic LP bug tracking this commit's release:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1963717
Focal:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1964422
--
I've seen the issues with backporting the upstream fix into 2.2, and it
does not look like something that is recommended. I'm working with the
customer to see whether the workaround of using
`--disable-luks2-reencryption` to mitigate their security concerns is a
good solution.
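A minimal sketch of that workaround, assuming a from-source rebuild of
the 2.2 package (only the flag name comes from this comment; all other
configure options are omitted):
./configure --disable-luks2-reencryption   # build without LUKS2 online reencryption
make && sudo make install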
--
Public bug reported:
This bug is to request the Security team address a CVE. The information
required is located in the following document:
https://docs.google.com/document/d/1pnH9UIQwgTYMKOB__xTEOyPy4K3BGK8OAbMpPwtjUqc/
I don't have the option to select the checkbox for "This bug is a
security
Works for me as well. Thank you for the quick fix. I thought I was going
to have to run a -proposed kernel package for a few weeks.
--
https://bugs.launchpad.net/bugs/1956401
Title:
amdgpu
@Kelsey, thank you for the quick proposed kernel. I installed it in my
system here and it solved my amdgpu errors completely. It boots quickly
again into the graphical login screen.
--
Same here with AMD Ryzen 5 3400G on kernel 5.13.0-23-generic
It never gets to the graphical login screen. I tested with Xorg on
21.10; I did not test Wayland, but I expect it to be the same. I can do
Ctrl+Alt+F4 to get to a text login screen a few minutes after booting.
This screen is where I saw the "
Public bug reported:
This bug is a security vulnerability.
I'm reporting this list of CVEs escalated by AWS according to the CVE
escalation policy.
They are requesting that CVE-2021-44224 and CVE-2021-44790 be
reclassified as High as per the NVD rating. The spreadsheet with the
official request
I have tested this on both Focal and Bionic VMs. I ensured that the
systemd-resolved unit was in a failed state, restarted it, added the
PPA, updated the systemd packages, and rebooted. After every reboot the
systemd-resolved unit was in an "active" state.
I tested this with at least 10 reboots on each
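For reference, a minimal sketch of the per-boot check, assuming
systemctl was used to read the unit state (the unit name comes from the
comment above):
sudo reboot
# after logging back in:
systemctl is-active systemd-resolved   # expect "active", not "failed"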
Public bug reported:
The /etc/cron.daily/aide script sets a variable $TMPBASE to "/run/aide".
Each time this script is run (daily), it moves the current data in
/run/aide/cron.daily to a directory with a random name:
/run/aide/cron.daily.old.XX.
When doing this, it preserves all the data
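A minimal sketch of the rotation described above (the paths are from
this report; the exact rename command inside the script is an
assumption):
TMPBASE=/run/aide
# each daily run stashes the previous run's data under a randomly named directory:
mv "$TMPBASE/cron.daily" "$TMPBASE/cron.daily.old.$RANDOM"
mkdir -p "$TMPBASE/cron.daily"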
** Summary changed:
- Google Confidnetial Compute fails to boot with 1.47
+ Google Confidential Compute fails to boot with 1.47
--
https://bugs.launchpad.net/bugs/1931254
Title:
Google C
Validation results. The proper number of vcpus showed up in Bionic,
Focal, Groovy, and Xenial-4.15. The Xenial 4.4 kernel only saw 256 vcpus.
ubuntu@ip-172-31-5-48:~$ uname -r && nproc
4.4.0-1121-aws
144
ubuntu@ip-172-31-5-48:~$ uname -r && nproc
4.4.0-1122-aws
256
ubuntu@ip-172-31-5-48:~$ uname -
Running over a day with the packages from the PPA in place:
● isc-dhcp-server.service - ISC DHCP IPv4 server
Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled;
vendor preset: enabled)
Active: active (running) since Tue 2020-08-04 16:23:19 UTC; 1 day 1h ago
Docs:
Issue filed against walinuxagent on GitHub:
https://github.com/Azure/WALinuxAgent/issues/1673
** Bug watch added: github.com/Azure/WALinuxAgent/issues #1673
https://github.com/Azure/WALinuxAgent/issues/1673
--
@pandeyvinod.india: Can you help us try to understand why your instances
are still panicking?
- Do you have a copy of the panic string?
- How did you update to the 0.11-4ubuntu2.6 package?
- Did you reboot the instance after the package was installed?
- Have any of the instances that previously
@jfirebaugh: According to https://usn.ubuntu.com/4360-2/
The CVE fix is in the latest version.
--
https://bugs.launchpad.net/bugs/1878723
Title:
Kernel panic when used with upstart after
FYI, version 0.11-4ubuntu2.5 was recently released. If you are still
experiencing this issue, the following steps will correct it:
1. Remove the postinst file from the affected package (version 0.11-4ubuntu2.1):
sudo rm /var/lib/dpkg/info/libjson-c2\:amd64.postinst
2. Re-run the package configure
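A hedged sketch of both steps together (the exact configure invocation
is an assumption, since the comment is truncated here):
sudo rm /var/lib/dpkg/info/libjson-c2\:amd64.postinst
sudo dpkg --configure libjson-c2:amd64   # assumed form of step 2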
@cekkent: Did you reboot between the "dpkg --unpack..." and "sudo apt
install -f" steps?
That reboot is incredibly important since it switches out the init
process from the faulty one in 2.1 to the one in the 2.2 package.
The "apt install -f" is intended to run the pre/post install scripts for
th
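For clarity, the ordering described above as one hedged sequence (the
.deb being unpacked is elided in the original and left as a
placeholder):
sudo dpkg --unpack ./libjson-c2_<fixed-version>_amd64.deb   # hypothetical filename
sudo reboot           # swaps out the faulty init path from the 2.1 package
sudo apt install -f   # then runs the pre/post install scripts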
** Attachment added: "zfs_list_-o_space_-r_bpool"
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1876334/+attachment/5364538/+files/zfs_list_-o_space_-r_bpool
--
** Attachment added: "zfs_list_-t_snap_-r_rpool"
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1876334/+attachment/5364536/+files/zfs_list_-t_snap_-r_rpool
--
** Attachment added: "zfs_list_-t_snap_-r_bpool"
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1876334/+attachment/5364537/+files/zfs_list_-t_snap_-r_bpool
--
** Attachment added: "Sosreport from laptop with same symptom"
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1876334/+attachment/5364535/+files/sosreport-neuromancer-2020-05-01-mejivuj.tar.xz
--
Public bug reported:
I had been running Focal since March 4th, having installed it via the
daily ISO at that time. I opted for ZFS root during the installation
process; that was mostly the reason I went with Focal in the first
place.
I ran my frequent, almost daily, apt upgrades to get the late
sosreport from an instance launched to reproduce this
** Attachment added:
"sosreport-ip-172-31-0-228.hibernate-debugging-20200401165040.tar.xz"
https://bugs.launchpad.net/ubuntu/+source/ec2-hibinit-agent/+bug/1869761/+attachment/5358706/+files/sosreport-ip-172-31-0-228.hibernate-debugging-20
cnewcome@wintermute:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
cnewcome@wintermute:~$ cat /proc/version
Linux version 5.3.0-42-generic (buildd@lcy01-amd64-019) (gcc version 7.4.0
(Ubuntu 7.4.0-1ubuntu1~18.04.1))
The gcc package was upgraded in 18.04, and a new kernel compiled with
that version hasn't been released yet.
** Package changed: gcc-defaults (Ubuntu) => linux (Ubuntu)
** Also affects: linux (Ubuntu Bionic)
Importance: Undecided
Status: New
--
** Changed in: walinuxagent (Ubuntu)
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1866407
Title:
GPU Driver extension issue (NVIDIA)
** Attachment added: "repository file created by following the manual Nvidia
instructions"
https://bugs.launchpad.net/ubuntu/+source/walinuxagent/+bug/1866407/+attachment/5337036/+files/cuda-manual.list
--
Adding repository files from each instance
** Attachment added: "repository file created by walinuxagent"
https://bugs.launchpad.net/ubuntu/+source/walinuxagent/+bug/1866407/+attachment/5337035/+files/cuda.list
--
The issue I found is that the walinuxagent Extension installs the 16.04
CUDA drivers on Ubuntu 18.04. The driver has changed a bit with the
inclusion of libglx-mesa0 in Ubuntu 18.04, which was not present in
16.04.
This causes the nvidia-driver-* package installation to fail.
Public bug reported:
This prevents my MAAS installation from creating VMs through the pods
interface.
cnewcome@neuromancer:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu Focal Fossa (development branch)"
cnewcome@neur
We had similar behavior on GCE by running Elasticsearch through the following
suite of unit tests:
https://github.com/elastic/rally-eventdata-track
The test suite takes about 5 days to run fully and any corruption can be found
by running:
zgrep CorruptIndexException /var/log/elasticsearch/elasti
Public bug reported:
While installing the linux-azure package, it also installs
linux-cloud-tools-azure.
Bionic + linux-azure
The following NEW packages will be installed:
linux-azure linux-azure-cloud-tools-5.0.0-1024 linux-azure-headers-5.0.0-1024
linux-azure-tools-5.0.0-1024 linux-cloud-tools-5.0.0-10
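If the cloud-tools packages are unwanted, a hedged workaround, assuming
they are pulled in via Recommends (which this report does not confirm):
sudo apt-get install --no-install-recommends linux-azure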
I don't believe this is anything new with the -34 kernel. I accidentally
installed the -32 kernel because I didn't save the sources.list file
with -proposed enabled and it gave me the same error. It appears to be
because the image is not set up for encryption and the cryptsetup-
initramfs package i
Disco verification: SUCCESS (except for crypt error not seen on Eoan
when running update-initramfs)
The package did experience a crypt error while the finish scripts were running
update-initramfs:
update-initramfs: Generating /boot/initrd.img-5.0.0-34-generic
cryptsetup: WARNING: The initramfs im
Eoan Verification: SUCCESS
Eoan:
I1107 01:13:14.719551 1 iotest.go:34] Starting loop...
I1107 01:13:14.719636 1 iotest.go:38] Generating chunk done. Took:
29.737µs
I1107 01:13:14.719693 1 iotest.go:45] Writing chunk done. Took: 44.206µs
I1107 01:13:14.723320 1 iotest.go:49
This was reproduced with the linux-generic kernels. The generic kernels
that experience this behavior are: 5.0.0-31-generic and
5.3.0-16-generic.
This behavior was corrected when the following upstream commits were added to
the lttng-modules package in Bionic:
2ca0c84f0b ("Fix: mm: create the new
VALIDATION: BIONIC: PASSED
See attached typescript output.
The snmpd package works as intended on Bionic. The snmpwalk returns the
data expected and it doesn't force direct-mapped autofs filesystems to
mount every time you run the snmpwalk.
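For reference, a hedged example of the kind of walk used in this
validation (community string, agent address, and the hrStorage subtree
are assumptions, not taken from the attachment):
snmpwalk -v2c -c public localhost HOST-RESOURCES-MIB::hrStorageDescr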
** Attachment added: "bionic_validate.txt"
https://
VALIDATION: XENIAL: PASSED
See attached typescript output.
The snmpd package works as intended on Xenial. The snmpwalk returns the
data expected and it doesn't force direct-mapped autofs filesystems to
mount every time you run the snmpwalk.
** Attachment added: "xenial_validate.txt"
https://
VALIDATION: DISCO: PASSED
See attached typescript output.
The snmpd package works as intended on Disco. The snmpwalk returns the
data expected and it doesn't force direct-mapped autofs filesystems to
mount every time you run the snmpwalk.
** Attachment added: "disco_validate.txt"
https://bug
I also tested the test package,
5.7.3+dfsg-1.8ubuntu3.2+testpkg20190906b3, in a direct-mapped autofs
environment that was the basis for the fix that caused this regression.
The test package does fix the original autofs "mount everything" issue
and allows for snmpwalk to monitor the mounted disks a
** Attachment added: "Bionic typescript output"
https://bugs.launchpad.net/ubuntu/+source/net-snmp/+bug/1835818/+attachment/5286499/+files/bionic_validate.txt
--
** Attachment added: "Disco typescript output"
https://bugs.launchpad.net/ubuntu/+source/net-snmp/+bug/1835818/+attachment/5286500/+files/disco_validate.txt
--
Validation:
I can confirm the fix for these is working on Xenial, Bionic, and Disco.
I will include typescript outputs for each Ubuntu version.
** Attachment added: "Xenial typescript output"
https://bugs.launchpad.net/ubuntu/+source/net-snmp/+bug/1835818/+attachment/5286498/+files/xenial_vali
Public bug reported:
Background: Napi_tx is a Linux kernel feature that makes the virtio
driver call the skb destructor after the packets are actually “out”
(i.e., at TX completion interrupt), as opposed to immediately after the
packets are enqueued. This provides socket backpressure and is critic
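Upstream exposes this as a virtio_net module parameter; a hedged way to
inspect and enable it (the parameter name comes from the upstream
feature, and the drop-in filename is illustrative, not from this
report):
cat /sys/module/virtio_net/parameters/napi_tx   # Y or N
echo "options virtio_net napi_tx=1" | sudo tee /etc/modprobe.d/virtio-napi-tx.conf
# takes effect the next time the module is loaded (e.g. after reboot)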
Public bug reported:
Our internal cluster has run into a few Ceph client related issues,
which were root-caused and resolved by the following commits:
https://github.com/ceph/ceph-client/commit/f42a774a2123e6b29bb0ca296e166d0f089e9113
https://github.com/ceph/ceph-client/commit/093ea205acd4b047c
** Attachment added: "cloud-init collect-logs"
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1830740/+attachment/5271884/+files/cloud-init.tar.gz
--
I created a test instance and was able to produce most of the data
requested. Attachments incoming.
** Attachment added: "cloud-init analyze show"
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1830740/+attachment/5271877/+files/ci-analyze-show.txt
--
** Attachment added: "cloud-init analyze blame"
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1830740/+attachment/5271879/+files/ci-analyze-blame.txt
--
** Attachment added: "cloud-init analyze dump"
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1830740/+attachment/5271878/+files/ci-analyze-dump.txt
--
This sosreport contains the systemd analyze blame output. It is from a
test instance that I was able to reproduce this issue on.
** Attachment added: "sosreport-cnewcomer-1559258602-20190530234322.tar.xz"
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1830740/+attachment/5271880/+f
issue.
If I run "sudo chvt 1", it will not switch out of that tty again until I
reboot my laptop. Using 2 through 12 all work and switch the display to
that tty.
Thanks,
Chris Newcomer
--
[ VERIFICATION COSMIC ]
libvirt VM:
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.10
DISTRIB_CODENAME=cosmic
DISTRIB_DESCRIPTION="Ubuntu 18.10"
$ dpkg -l | grep sosreport
ii sosreport 3.6-1ubuntu0.18.10.1
amd64 Set of tools to gathe
[ VERIFICATION XENIAL ]
libvirt VM:
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
$ dpkg -l | grep sosreport
ii sosreport 3.6-1ubuntu0.16.04.1
amd64 Set of t
[ VERIFICATION BIONIC ]
Bare metal:
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"
$ dpkg -l | grep sosreport
ii sosreport 3.6-1ubuntu0.18.04.1
amd64 S
I forgot to include my test cloud info:
Openstack Queens running on Xenial
| 458939d4dc404a47a8a496b7d513a5ab | RegionOne | keystone | identity
| True| internal | http://10.5.0.18:5000/v3 |
| 930b5cf107724f87845a645fc625fc07 | RegionOne | keystone | identity
Hi,
I've had a chance to test the fix for this. I was able to confirm that
the 1.2 version of the package had an issue with the v3 keystone API. I
then installed the 1.4 version of the package and it was able to
successfully sync the images.
glance-simplestreams-sync log file included
** Attachm
Testing results:
LXD containers:
Xenial: ran and created an output file that looked fine. It output a
single, seemingly harmless error during the run: [plugin:kvm] debugfs
not mounted and mount attempt failed
Bionic: ran with no errors, output file appeared fine
Cosmic: ran with no errors, output
Ran the 3.5 version for Trusty, Xenial, and Artful on a VM that was
MAAS-deployed. I have not been able to discover any issues. I also
installed it on a bare-metal server running Xenial with no problems.
--
Trusty:
NOTE: This is a pass for the ubuntu-advantage-tools script. The
duplicate machine-id issue I had was traced back to the canonical-
livepatch snap.
root@test4:~# dpkg -l ubuntu-advantage-tools
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-in
Trusty:
NOTE: I tested it on a machine without the necessary HWE kernel; the
tool installed the kernel, but livepatch wasn't enabled after the reboot
into the new kernel. I had to remove the machine-id in order to get
livepatch working.
Once I did that, it worked as expected.
root@test4:~# d
Artful verification:
root@test1:~# dpkg -l ubuntu-advantage-tools
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version
Zesty verification:
root@test2:~# dpkg -l ubuntu-advantage-tools
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version
Xenial Verification:
root@test3:/home/ubuntu# dpkg -l ubuntu-advantage-tools
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version A
Christian,
The customer that I was working with has changed their environment, so I
cannot go back to them. I will have to deploy OpenStack in our test lab
to verify the default settings. It will take some time for me to get
this checked out. I will reply back when I get it running. Thanks.
--
This is the data that led me to believe the default setting was too
low. I gathered this from a customer:
ubuntu@x:~$ virsh list | wc -l
212
ubuntu@x:~$ sudo lsof -p $(pgrep virtlogd) | wc -l
893
ubuntu@x:~$ sudo grep "open files" /proc/$(pgrep virtlogd)/limits
Max open files 1
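A hedged mitigation sketch, assuming the limit is raised via a systemd
drop-in (this comment only shows the data, not the eventual fix; the
value is illustrative):
sudo mkdir -p /etc/systemd/system/virtlogd.service.d
printf '[Service]\nLimitNOFILE=8192\n' | \
    sudo tee /etc/systemd/system/virtlogd.service.d/limits.conf
sudo systemctl daemon-reload && sudo systemctl restart virtlogd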
Public bug reported:
Release:
Ubuntu 14.04.5 LTS
Package, version, and arch:
sosreport 3.1-1ubuntu2.2 amd64
Expected:
Have sosreport collect /sys/fs/pstore data.
What happened instead:
sosreport did not collect /sys/fs/pstore
Justification:
The pstore data should be collected upon sosre