The mailing list thread eventually went nowhere, even with someone
helping me out privately.  He seemed to think it was a udev issue,
which was highly amusing since the udev (and systemd) binaries were
running from the very devices they believed weren't ready yet.

Here is the final scorecard.  I have three systems, all using btrfs
with raid0 across two devices.  Root and home are separate subvolumes
on that filesystem.
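
For reference, the layout on all three machines is roughly equivalent
to the following (device names and subvolume names are illustrative
placeholders, not the actual ones):

    # two-device btrfs with raid0 data; root and home as subvolumes
    mkfs.btrfs -d raid0 /dev/sdX2 /dev/sdY2
    mount /dev/sdX2 /mnt
    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home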

System 1 (workstation - the report above): systemd brings the system
up but times out waiting to mount /home.  Setting /home to nofail in
fstab and then running mount -a from a console works just fine.  This
is the only system where the two drives are identical - exactly the
same SSD model and firmware version on the same controller.  (Note the
system fails to boot if they are on different controllers, which wasn't
a problem with earlier Ubuntu versions.)  Booting with upstart works
fine and is how I have now configured the system.
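
Concretely, the workaround amounts to an fstab entry along these lines
(device path and subvolume name are placeholders for the real values):

    # /etc/fstab - nofail stops systemd blocking the boot on /home
    /dev/sdX2  /home  btrfs  subvol=@home,nofail  0  0

followed by a plain "mount -a" from a console once the system is up.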

System 2 (laptop): the two devices are wrapped in LUKS+dmcrypt.  They
are decrypted at boot, then there is a pause, and then finally
everything comes up.  Most of the time that pause is about 3 minutes,
but sometimes it is only a few seconds.  Again, no problems in earlier
Ubuntu releases.
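
For context, the relevant configuration on the laptop is roughly the
following (device names and mapper names are placeholders):

    # /etc/crypttab - both raid0 members are unlocked at boot
    crypt0  /dev/sdX2  none  luks
    crypt1  /dev/sdY2  none  luks

    # /etc/fstab - btrfs assembles the raid0 across the mapped devices;
    # either mapper device can be referenced once both have been scanned
    /dev/mapper/crypt0  /      btrfs  subvol=@      0  0
    /dev/mapper/crypt0  /home  btrfs  subvol=@home  0  0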

System 3 (server): like system 1 it uses bare devices, but in this
case they are roughly the same capacity from different vendors.
Everything works perfectly, and if anything it boots a bit faster than
under earlier Ubuntu releases.

-- 
https://bugs.launchpad.net/bugs/1447879

Title:
  fscking btrfs consisting of multiple partitions fails
