On Fri, Dec 11, 2015 at 11:04 AM, Jonathan Dowland <j...@debian.org> wrote:
> On Thu, Dec 10, 2015 at 02:38:02PM +0100, Anders Andersson wrote:
>> My question is thus: How am I supposed to solve this the "systemd
>> way"? I want to be able to start an encrypted block device using a
>> normal systemd service/device so that I can later have systemd units
>> depend on this.
>
> So in theory what you've done should work for the first part, and you
> would then need to create .mount units that depend on the crypt device,
> like this (examples cribbed from my system doing exactly this):
Thank you for your reply! After spending some more time digging into this, I agree with your conclusion that it should work, and I think I have now found why it does not work in my case.

Unlocking the disk fails halfway if there is no "valid" UUID on the LUKS *target*. A valid UUID is one that exists and is unique. This means that it fails in these somewhat common scenarios:

*) Your LUKS device contains random numbers for a key, is zeroed out,
   or contains an unknown filesystem.
*) It is one part of a multi-device btrfs filesystem (this is the
   killer for me).
*) You are mounting an LVM snapshot at another mountpoint (I have not
   tested this).

In a multi-device btrfs filesystem, each device has the same UUID. I'm not going to debate the sanity of that; this is how it is. The first device to be unlocked works, and the rest are technically unlocked as well, but because their UUID already exists from the first device, something goes wrong and they time out.

I have created a clear test case for this using the latest release of Fedora and sent it to the systemd mailing list for comments, but apparently it is too obscure for anyone to comment on, so I guess I have to find other people to bug.

In the meantime, my workaround is to modify the service files to remove all references to /dev/mapper/xxx and instead use "Requires=crypt1.service crypt2.service" etc. This seems to work.

Thank you for your time!

// Anders
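P.S. In case it helps anyone else, here is a minimal sketch of what such a modified .mount unit might look like. The names crypt1/crypt2, /dev/mapper/crypt1, and /srv/data are hypothetical placeholders for this example, not taken from my actual setup:

```ini
# /etc/systemd/system/srv-data.mount (hypothetical example)
# Instead of depending on the dev-mapper-*.device units (whose UUID
# probing times out for the later members of a multi-device btrfs
# filesystem), depend directly on the services that unlock the LUKS
# containers.
[Unit]
Requires=crypt1.service crypt2.service
After=crypt1.service crypt2.service

[Mount]
# Mounting any one member device brings up the whole btrfs filesystem.
What=/dev/mapper/crypt1
Where=/srv/data
Type=btrfs

[Install]
WantedBy=multi-user.target
```

Note that the unit file name has to match the mount point (srv-data.mount for /srv/data), and the crypt*.service units here stand in for whatever services actually run the unlocking on a given system.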