There are several angles of attack here:

  └─zfs-import-cache.service @1min 786ms +272ms
    └─systemd-udev-settle.service @494ms +1min 275ms
This is a case of "you asked for it, you got it", really. udev-settle is a
workaround for old software that wasn't written with hotplugging in mind. It's
2016, and we really should avoid introducing new dependencies on udev-settle,
as it's conceptually broken and can't work reliably. It simply does not work
for any hotplugged device, for a device which the kernel just detects late, or
for one which first needs to be unwrapped from an LVM or cryptsetup layer, etc.

I think it would be much more elegant to instead create udev rules (or systemd
units) which load the ZFS cache as soon as a ZFS block device is detected.
Would that be possible/reasonable? I don't know the details of what
import-cache does, but I hope/suppose it has some way of doing this on a
per-device basis?

  └─systemd-udev-trigger.service @415ms +55ms

This is cloud-init's rule for blocking udev rules until network devices get
detected and cloud-init's naming rules get applied. This is a hairy topic, and
Scott and I have already discussed it at length. I think there's a better
solution here, but I doubt we'll completely turn this around by the release.
This just illustrates one of the problems of the "wait for all hardware just
to single out a particular device" approach from above -- it waits on things
which are completely unrelated to ZFS.

** No longer affects: systemd (Ubuntu)

--
You received this bug notification because you are a member of Ubuntu Touch
seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1571761

Title:
  zfs-import-cache.service slows boot by 60 seconds

Status in cloud-init package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Fresh uvt-kvm guest, then

    $ sudo apt-get install zfsutils-linux
    $ sudo reboot

  The reboot will show "waiting for tasks" on the console, then after ~60
  seconds it continues booting.
  Logging in shows:

    $ sudo systemd-analyze critical-chain zfs-mount.service
    The time after the unit is active or started is printed after the "@" character.
    The time the unit takes to start is printed after the "+" character.

    zfs-mount.service +81ms
    └─zfs-import-cache.service @1min 786ms +272ms
      └─systemd-udev-settle.service @494ms +1min 275ms
        └─systemd-udev-trigger.service @415ms +55ms
          └─systemd-udevd-control.socket @260ms
            └─-.mount @124ms
              └─system.slice @129ms
                └─-.slice @124ms

  Seems possibly related; see also the discussion at
  https://github.com/zfsonlinux/zfs/issues/2368

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu8
  ProcVersionSignature: User Name 4.4.0-18.34-generic 4.4.6
  Uname: Linux 4.4.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.20.1-0ubuntu1
  Architecture: amd64
  Date: Mon Apr 18 16:42:35 2016
  ProcEnviron:
   TERM=screen.xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=<set>
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1571761/+subscriptions
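For illustration, the per-device approach suggested above could be sketched as
a udev rule along these lines. This is a hypothetical fragment, not something
zfs-linux ships: the file name is invented, and reusing
zfs-import-cache.service as the pulled-in unit is an assumption. It relies on
systemd's standard device-unit integration (TAG+="systemd" plus
ENV{SYSTEMD_WANTS}) so the import runs as soon as a device carrying a ZFS
member signature appears, rather than after udev-settle has waited for all
hardware:

```
# /etc/udev/rules.d/90-zfs-import.rules -- illustrative sketch only
# When a block device with a ZFS member signature shows up (including
# hotplugged or late-detected devices), tag it for systemd so its device
# unit pulls in the import service, instead of depending on udev-settle.
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="zfs_member", \
  TAG+="systemd", ENV{SYSTEMD_WANTS}+="zfs-import-cache.service"
```

ID_FS_TYPE is populated by udev's blkid builtin, so the match fires only for
devices that actually contain ZFS pool members, which is exactly the
per-device triggering the comment asks about.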