Just one pool, containing both the actual datasets and the canmount=noauto backup datasets that share the same mountpoints. (There's a separate boot pool, but it doesn't matter in this circumstance.)
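For concreteness, the layout described above looks something like this (dataset names are hypothetical; the backup datasets mirror the live ones and share their mountpoints, differing only in canmount):

```
# zfs list -o name,mountpoint,canmount   (hypothetical names)
NAME                           MOUNTPOINT  CANMOUNT
rpool/ROOT/debian              /           on
rpool/ROOT/debian/usr          /usr        on
rpool/backup/ROOT/debian       /           noauto
rpool/backup/ROOT/debian/usr   /usr        noauto
```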
I don't use zfs-import-cache, since it's a single pool that contains the root, so it's on the kernel cmdline and imported at that point. I'll try to find the time to reproduce this in a VM this week...

On Tue, 2020-06-09 at 17:06 -0500, Richard Laager wrote:
> On 6/7/20 3:12 PM, wxcafe wrote:
> > The systemd zfs-mount-generator script
> > (/lib/systemd/system-generators/zfs-mount-generator) can break system
> > boot if there are multiple datasets with the same mountpoint, because
> > it ignores the zfs property canmount=noauto.
>
> It certainly does not "ignore" canmount=noauto. There's all kinds of
> logic in the generator to deal with canmount=noauto.
>
> > I store backups on my system, and after upgrading, the system wouldn't
> > boot anymore: while my backups are canmount=noauto, the generator was
> > trying to mount multiple datasets to the same mountpoints (/, /usr,
> > ...), which obviously breaks... everything.
>
> If you have datasets marked as canmount=on, they should take precedence
> over any marked canmount=noauto for the same mountpoint.
>
> Are there multiple pools involved here, or just one?
>
> Can you provide a copy of your cache file(s) from /etc/zfs/zfs-list.cache?

-- 
Wxcafé <[email protected]>
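For anyone wanting to check their own cache files for this situation: the generator reads per-pool files under /etc/zfs/zfs-list.cache/, whose lines are tab-separated `zfs list -H` output with name, mountpoint, and canmount as the first three columns. Here's a minimal sketch (with made-up sample data, since the real files live at /etc/zfs/zfs-list.cache/<poolname>) that flags any mountpoint claimed by more than one canmount=on dataset:

```shell
#!/bin/sh
# Sketch: scan a zfs-list.cache-style file for mountpoints claimed by
# more than one canmount=on dataset. Sample data is hypothetical; point
# "$cache" at /etc/zfs/zfs-list.cache/<poolname> on a real system.

cache=$(mktemp)

# Tab-separated columns, as written by `zfs list -H`:
# name, mountpoint, canmount (further columns may follow and are ignored).
printf '%s\t%s\t%s\n' \
    'rpool/ROOT/debian'        '/'  'on'     \
    'rpool/backup/ROOT/debian' '/'  'noauto' \
    'rpool/other/ROOT/debian'  '/'  'on'     > "$cache"

# Only canmount=on datasets are mounted automatically, so only those can
# collide; noauto entries are skipped, matching the behavior described above.
awk -F'\t' '$3 == "on" { if (seen[$2]++) print "duplicate mountpoint: " $2 }' "$cache"

rm -f "$cache"
```

With the sample data this prints `duplicate mountpoint: /`, since two canmount=on datasets both claim the root.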

