[Expired for grub2 (Ubuntu) because there has been no activity for 60
days.]
** Changed in: grub2 (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945874
Hi,
Does this issue still persist?
If someone wants to submit a patch for this, I am open to reviewing it.
** Changed in: grub2 (Ubuntu)
Status: New => Incomplete
--
A quick update, I might get a chance to dig into this again. I recently
noted that the issue persists in 22.04, via the beta installer.
--
I haven't had a chance to dig deeper, but I just noticed this same issue in
Focal Fossa.
If I get a chance to debug this I'll submit a patch here. I might get a
chance over the next week, during Thanksgiving break.
--
Actually, I got caught by "imported_pools" vs "all_pools". So yeah, this
is expected (that list is for cleanup: it removes any pool the script
temporarily imported while enumerating the systems available to grub).
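The bookkeeping described above can be sketched in plain shell. This is illustrative only, not the actual 10_linux_zfs code; the pool names and variable names are examples:

```shell
#!/bin/sh
# Sketch of the cleanup list discussed above: the script temporarily
# imports pools to enumerate bootable systems, and must later export only
# the pools *it* imported, never one that was already in use (like the
# running system's).
all_pools="rpool tank backup"   # every pool the script can see (example names)
already_imported="rpool"        # pools that were imported before the script ran

imported_pools=""               # pools the script imported itself
for p in $all_pools; do
    case " $already_imported " in
        *" $p "*) ;;            # was already imported: leave it alone
        *) imported_pools="$imported_pools $p" ;;  # ours: export at cleanup
    esac
done

# At cleanup time, only $imported_pools would get `zpool export`.
echo "to export:$imported_pools"
```

Confusing the two lists, as described above, would make the script treat a pre-imported pool as one of its own temporaries.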
The issue is below (I don’t have time to debug it fully right now), but in
this part of set -
Didier,
That part didn't strike me as exceptional because the pool's already
mounted, since we're running update-grub from the running system. It's
not available to be listed or imported again.
I'll want to read 10_linux_zfs in depth to see what it's doing, but if
it's depending on a list to com
Thanks for getting back.
Indeed, as I told you, manual mount on /boot is supported (and backed by
a large testsuite).
I’m puzzled by your failing case: as you can see, we couldn’t import any
pools in the script:
+ zpool import -f -a -o cachefile=none -o readonly=on -N
+ err=no pools available
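For readers following along, the `+ err=no pools available` trace line suggests the script captures zpool's stderr into a variable, roughly like this. This is a sketch with a stub function standing in for the real zpool so it runs without ZFS installed; the actual script's capture may differ:

```shell
#!/bin/sh
# Stub standing in for zpool, mimicking `zpool import` finding nothing.
zpool() {
    echo "no pools available" >&2
    return 1
}

# 2>&1 >/dev/null: stderr flows into the command substitution while
# stdout is discarded, so $err holds zpool's error message.
err=$(zpool import -f -a -o cachefile=none -o readonly=on -N 2>&1 >/dev/null) || true
echo "captured: $err"
```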
A quick test shows the issue doesn't crop up if I use an install with
inherited mountpoints in a more standard hierarchy. I haven't worked
out what's different yet.
tank/var/log /var/log zfs defaults 0 0
tank/tmp /tmp zfs defaults 0 0
/dev/md0 /boot ext4 defaults 0 1
/dev/mapper/swap none swap sw 0
Sure. This is a mode I've been using lately: legacy mountpoints on
datasets, mounted from fstab. I suspect this would do the same thing
with a traditional inherited mount hierarchy.
# cat /etc/fstab
tank/zroot / zfs defaults 0 0
tank/home /home zfs defaults 0 0
tank/usr/src /usr/src zf
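For context, "legacy" mountpoints mean ZFS leaves mounting to fstab/mount(8) instead of handling it itself. A dataset is switched into that mode with something like the following fragment (not runnable here; "tank/home" is just an example dataset name):

```shell
# Hand mounting of this dataset over to fstab instead of ZFS:
zfs set mountpoint=legacy tank/home
# Then an fstab line like the ones above takes over:
#   tank/home  /home  zfs  defaults  0  0
```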
Thanks for reporting this bug and helping make Ubuntu better.
10_linux_zfs is supposed to be able to track an ext4 /boot and handle it
correctly (by looking at fstab, among other things).
What would be interesting is to see your fstab, in case we are missing
anything. Also, can you set -x at the top of 10_
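For anyone unfamiliar with the suggestion: `set -x` makes the shell print each command to stderr (with a `+ ` prefix) before running it, which is where trace lines like `+ zpool import ...` come from. A minimal illustration:

```shell
#!/bin/sh
# Capture the xtrace output of a subshell. Adding `set -x` near the top
# of /etc/grub.d/10_linux_zfs would make update-grub emit a trace like
# this for every command the script runs.
trace=$( (set -x; echo hello) 2>&1 >/dev/null )
echo "$trace"    # something like: + echo hello
```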
** Package changed: grub (Ubuntu) => grub2 (Ubuntu)
--
Title:
21.10 beta, errors in 10-linux and 10_linux_zfs
** Description changed:
-
In a custom install of Ubuntu 21.10 beta, both hardware and VM installs
suffer from a bug in the grub.d/10_linux and 10_linux_zfs scripts. (For
comparison, Debian Bullseye, running a similar version of grub, doesn't
have this issue.)
Unique to Ubuntu, there'