This issue makes all of our bpftrace-based observability tools
non-functional on our Ubuntu 24.04 machines, taking away an important
troubleshooting capability. How did this package pass testing for an
Ubuntu 24.04 SRU given that its bpftrace binary doesn't even run on an
Ubuntu 24.04 machine?
(Yes I kn
Public bug reported:
The current Ubuntu 24.04 version of libllvm18:amd64 does not package a
libLLVM-18.so.18.1 shared library, only a symbolic link to it:
cks@sanspare:~$ ls -l /usr/lib/x86_64-linux-gnu/libLLVM-18.so*
lrwxrwxrwx 1 root root 15 May 27 2024 /usr/lib/x86_64-linux-gnu/libLLVM-18.so
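A quick way to confirm the breakage (a sketch; the exact dpkg-query
wording may differ slightly):

$ test -e /usr/lib/x86_64-linux-gnu/libLLVM-18.so || echo dangling
dangling
$ dpkg -S libLLVM-18.so.18.1
dpkg-query: no path found matching pattern *libLLVM-18.so.18.1*

(test -e follows symlinks, so it fails when the link's target is
missing.)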
I think I know what is happening here. In Ubuntu 20.04 and 22.04, the
grub-pc.postinst has a chunk of code that was designed to deal with bug
#1889556 by skipping running grub-install on package updates. The
initial commit comment by Steve Langasek says:
debian/postinst.in: Avoid calling grub-install
I didn't change grub-pc/install_devices, and on our 22.04 BIOS MBR +
mirrored software RAID servers (of which we have a lot), it has the same
value (or the same sort of value, naming the md device). A random 22.04
server install is also using 'super 1.2' for its root /dev/md0 device
superblock format.
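For anyone comparing their own machines, both values can be checked with
standard tools (a sketch; the install_devices value shown is just an
example of the 'naming the md device' case):

$ sudo debconf-show grub-pc | grep install_devices
* grub-pc/install_devices: /dev/md0
$ sudo mdadm --detail /dev/md0 | grep Version
Version : 1.2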
Both 'dpkg-reconfigure grub-pc' (then selecting the /dev/vd* disks) and
manually running 'grub-install /dev/vda' (and then /dev/vdb) work.
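Spelled out, the manual half of that workaround is just (as root; device
names are from this VM):

# grub-install /dev/vda
# grub-install /dev/vdb

'dpkg-reconfigure grub-pc' prompts for the install devices and then runs
grub-install on whatever you select.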
Also, here's sgdisk output (the two disks have identical output apart from
names):
Disk /dev/vda: 83886080 sectors, 40.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 08D25DA2-9B12-45EA-B7EE-0978D2780899
Partition table holds up to 128 entries
Main partition table begins
The RAID is on partitions, but there is no LVM involved. The layout was
set up through the 24.04 server installer with custom storage layout,
selecting both disks as boot disks, and then using all of their space as
a single partition for the software RAID. The software RAID itself is
unpartitioned.
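For reference, the information above can be regenerated with something
like:

$ sudo sgdisk -p /dev/vda
$ sudo sgdisk -p /dev/vdb
$ sudo mdadm --detail /dev/md0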
Public bug reported:
I am testing the 24.04 pre-beta in a libvirt virtual machine with two
/dev/vd* disks set up as a single mirrored software RAID device,
/dev/md0, that is used for the root filesystem. Since this is a libvirt
install, it is using BIOS booting, not UEFI (maybe someday libvirt will
It appears that you can work around this by installing the bpftrace
debugging symbols. After following the general directions of
https://wiki.ubuntu.com/Debug%20Symbol%20Packages I installed bpftrace-
dbgsym and now bpftrace appears to work.
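For anyone else hitting this, the steps boil down to roughly the
following (per the wiki page above; a sketch that assumes the 24.04
'noble' release):

$ sudo apt install ubuntu-dbgsym-keyring
$ echo "deb http://ddebs.ubuntu.com noble main restricted universe multiverse" | sudo tee /etc/apt/sources.list.d/ddebs.list
$ sudo apt update
$ sudo apt install bpftrace-dbgsym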
This is apparently because the bpftrace binary is stripped. A similar
problem reported upstream is
https://github.com/iovisor/bpftrace/issues/954
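You can check whether your binary is affected with file(1); the relevant
part is the trailing 'stripped' (output abbreviated here):

$ file /usr/bin/bpftrace
/usr/bin/bpftrace: ELF 64-bit LSB pie executable, x86-64, ..., stripped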
** Bug watch added: github.com/iovisor/bpftrace/issues #954
https://github.com/iovisor/bpftrace/issues/954
Based on inspecting the ZFS source code, this looks like a ZFS inode
where there is a mismatch between the claimed size of the inode's (ZFS)
ACL and how it's stored. If you're willing to do some work, it might be
possible to narrow this down to identify a specific inode and what's
wrong with it, wh
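If you get that far, zdb can dump the on-disk state of a single object
once you know its number (a sketch; 'pool/dataset' and the object number
are placeholders, with the number coming from ls -i):

$ ls -i /path/to/suspect/file
123456 /path/to/suspect/file
$ sudo zdb -dddd pool/dataset 123456

At that verbosity zdb prints the dnode's attributes, which should
include the ACL information involved in the mismatch.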
Public bug reported:
If you have an Ubuntu 18.04 Amanda client machine, you have probably
only installed the amanda-client package (and then amanda-common, which
it depends on). If you try to run 'amrecover' on such a machine, you
will get a failure:
# amrecover -s -t -C
AMRECOVER Version 3.5.
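For context, a machine in this state shows only the two client-side
packages (version and description columns omitted here):

$ dpkg -l | grep amanda
ii  amanda-client  ...
ii  amanda-common  ...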
Thanks for your encouragement. I've now filed this as an issue with
upstream systemd as https://github.com/systemd/systemd/issues/3943 .
** Bug watch added: github.com/systemd/systemd/issues #3943
https://github.com/systemd/systemd/issues/3943
I've confirmed this behavior on the yakkety live build you linked to
above, with systemd 231 according to its dpkg output. I gathered as much
data about it as I could think of (and I can go back for more if
necessary). Would you rather I pass the data to you here for you to file
an upstream bug with
In 16.04 (and I think everywhere), /sys is sysfs, so its contents are
generated by the kernel, device drivers, and so on. Udev looks at sysfs
in order to determine device information (e.g. ATA port number) that it
uses to create everything else. How hardware is represented in sysfs can
change over kernel versions.
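If it helps, the exact sysfs attributes udev walks for a given disk can
be seen with:

$ udevadm info -a -n /dev/sdk

(that is, --attribute-walk against the device node).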
Here is the full 'udevadm test' output for two disks on the same port
multiplier channel. I can do a disk on a different channel as well if
you want.
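(Output like this can be captured with, e.g.:

$ udevadm test /sys/block/sdk > sdk-udevadm-test 2>&1

using the disk's sysfs path.)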
On 12.04, the sysfs path of the same disk slot is
/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:00.0/host8/target8:0:0/8:0:0:0/block/sdk
Public bug reported:
We have a just-installed Ubuntu 16.04 LTS machine with a number of disks
behind port-multiplier eSATA ports, all of them driven by a SiI 3124
controller (sata_sil24 kernel driver). Our machine sees all disks on all
channels; however, under 16.04 only one disk from each channel shows up.
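The controller and driver can be confirmed with something like:

$ lspci | grep -i 'silicon image'
$ lsmod | grep sata_sil24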
When this happened to us during a mysql-server upgrade, the root cause
was that we had mysql (the server) disabled, i.e. 'systemctl disable
mysql'. When you do this, 'invoke-rc.d mysql start' winds up doing
nothing and not starting the server, which causes mysql_upgrade to fail
because it requires a running server.
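A quick way to check whether you are in this state, plus a plausible
(untested) way around it before upgrading:

$ systemctl is-enabled mysql
disabled
$ sudo systemctl start mysql    # give mysql_upgrade a running server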
Not all aspects of this PAM failure can be fixed easily, since daemons
other than cron were also affected. Cron can be restarted without user
impact, but something like xdm cannot be.
(Since we got hurt by the xdm issue, not by the cron issue, I am
rather sensitive about this.)
I personally think
We have an Ubuntu 10.04 machine with the original update applied where
xdm had not been restarted and was thus rejecting login attempts.
I can confirm that applying the just-released PAM update made xdm
accept logins again. The update also did not break cron, which had been
restarted and so was using the updated PAM.
@Marc: the previous libpam version (1.1.1-2ubuntu5 for 10.04 LTS)
doesn't seem to be available any more, or at least 'apt-get' can't
find it, which makes downgrading hard. We would have to roll all the
way back to 1.1.1-2ubuntu2 ... which is missing a root escalation CVE
(CVE-2010-0832, a root privilege escalation).
This PAM issue also affects xdm, which no longer allows people to log in
(it syslogs the same error message). This has caused us serious problems
on our multiuser login servers, because of course we cannot simply
reboot the machines and restarting xdm has the pleasant side effect of
instantly logging out every current X session.
** Attachment added: "testparm -s output"
https://bugs.launchpad.net/ubuntu/+source/samba/+bug/787755/+attachment/2141366/+files/testparm-s
(this is the smb.conf constructed as described in the reproduction
section)
** Attachment added: "/etc/samba/smb.conf"
https://bugs.launchpad.net/ubuntu/+source/samba/+bug/787755/+attachment/2141365/+files/smb.conf
** Attachment added: "smbclient -L output"
https://bugs.launchpad.net/bugs/787755/+attachment/2141346/+files/cksvm-smbclient
Public bug reported:
Binary package hint: samba
Our environment uses a CUPS server that is separate from the machine
running Samba; the Samba machine has an /etc/cups/client.conf with
the ServerName of the CUPS server. In this setting, Samba does not
notice when you add or remove a CUPS printer.
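For concreteness, the entire client-side CUPS configuration involved is
one line (the hostname here is an example):

# /etc/cups/client.conf
ServerName cups.example.com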
I can report that the proposed kernel resolves what is either this issue
or a closely related issue. On our Ubuntu 10.04 machines, unmounting an
NFS filesystem takes a significant amount of time, on the order of
several seconds to tens of seconds; my test machine runs between 15
seconds and 25 seconds.
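(The numbers are simple wall-clock timings of the unmount itself, along
the lines of:

$ time umount /some/nfs/mountpoint

with the real mount point name elided here.)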
** Changed in: procmail (Ubuntu)
Status: Invalid => New
Procmail opens $HOME/.procmailrc before dropping setuid permissions
https://bugs.launchpad.net/bugs/407459
Even if procmail closes and reopens the file later as non-root, there
are still two problems here. First, procmail has opened (and closed) a
file with root permissions. There are 'files' where merely opening (and
closing) them has side effects; for example, pointing $HOME/.procmailrc
at a rewindable tape device.
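As a concrete illustration of the first problem (a hypothetical setup;
/dev/st0 is the classic rewind-on-close tape device):

$ ln -s /dev/st0 $HOME/.procmailrc

Merely having a root-privileged procmail open and then close this 'file'
rewinds whatever tape is in the drive, even though nothing was read.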
Whoops, I effectively got the version number and Ubuntu release
wrong, because I missed that we are still using a Dapper-derived
lighttpd.conf on our Hardy machines. (My apologies for the
confusion; I should have checked to be sure.)
The dapper lighttpd.conf says:
$HTTP["host"] == "localhost" {
Public bug reported:
Binary package hint: lighttpd
Ubuntu release: hardy (8.04)
Version: 1.4.19-0ubuntu3.1
The normal Ubuntu lighttpd configuration file exposes /usr/share/doc to
everyone who can talk to your web server, as the /doc/ URL, not just
people on the same machine.
The lighttpd configu