--As of February 19, 2011 12:01:37 PM +0000, Matthew Seaman is alleged to have said:

Let's say I install a FreeBSD system using a ZFS-only filesystem into a
box with hot-swappable hard drives, configured with some redundancy.  Time
passes, one of the drives fails, and it is replaced and rebuilt using the
ZFS tools.  (Possibly on auto, or possibly by just doing a 'zpool
replace'.)

Is that box still bootable?  (It's still running, but could it *boot*?)

Why wouldn't it be?  The configuration in the Wiki article sets aside a
small freebsd-boot partition on each drive, and the instructions tell
you to install boot blocks as part of that partitioning process.  You
would have to repeat those steps when you install your replacement drive,
before adding the new disk back into your zpool.
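
Roughly, for a replacement disk (an untested sketch: the device name ada1,
the labels disk0/disk1, and the pool name zroot are just examples following
the wiki article's layout):

  gpart create -s gpt ada1
  gpart add -s 128k -t freebsd-boot ada1
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
  gpart add -t freebsd-zfs -l disk1 ada1
  zpool replace zroot gpt/disk0 gpt/disk1

The point being: write the bootcode before handing the new partition to
zpool, or the disk resilvers fine but carries no loader.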

So long as the BIOS can read the bootcode from one or other of the drives, and
can then access /boot/zfs/zpool.cache to learn about what zpools you
have, then the system should boot.

So, assuming a forgetful sysadmin (or someone new who didn't know about the setup in the first place), is that a yes or a no for the one-drive-replaced case?

It definitely is a 'no' for the all-drives-replaced case, as I suspected: you would need to have repeated the partitioning manually, rather than letting ZFS handle it.

If not, what's the minimum needed to support booting from another disk,
and using the ZFS filesystem for everything else?

This situation is described in the 'Boot ZFS system from UFS' article
here: http://wiki.freebsd.org/RootOnZFS/UFSBoot

I use this sort of setup for one system where the zpool has too many
drives in it for the BIOS to cope with; works very well booting from a
USB key.
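
For reference, the moving parts (a sketch; the pool name zroot is just an
example): /boot lives on the small UFS partition, so the loader and
/boot/zfs/zpool.cache are read from there, and /boot/loader.conf on that
partition points the kernel at the pool:

  zfs_load="YES"
  vfs.root.mountfrom="zfs:zroot"

If the pool configuration changes, the zpool.cache on the UFS partition
has to be kept in sync, or the loader won't know about your pools.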

Thanks; I wasn't sure if that procedure would work if the bootloader was on a different physical disk than the rest of the filesystem. Nice to hear from someone who's tried it that it works. ;)

In fact, while the partitioning layout described in the
http://wiki.freebsd.org/RootOnZFS articles is great for holding the OS
and making it bootable, for using ZFS to manage serious quantities of
disk storage, other strategies might be better.  It would probably be a
good idea to have two zpools: your bootable zroot pool, plus one for the
bulk of the space built from whole disks (i.e. without using gpart or
similar partitioning).  Quite apart from wringing the
maximum usable space out of your available disks, this also makes it
much easier to replace failed disks or use hot spares.
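
To sketch the idea (the pool name tank, the raidz2 layout, and the da*
device names are all illustrative, not from the wiki):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5
  zpool add tank spare da6

Because the data pool uses whole disks, replacing a failed one needs no
gpart step at all: 'zpool replace tank da2 da7' is the entire procedure.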

If a single disk failure in the zpool can render the machine unbootable, it's better still to have a dedicated bootloader drive: it increases the mean time between failures of your boot device (and therefore your machine), and it reduces the 'gotcha' value.  In a hot-swap environment booting directly off ZFS, you could fail a reboot a month (or more) after the disk replacement, and finding the problem then will be a headache until someone remembers this setup tidbit.

If the 'fail to boot' only happens once *all* the original drives have been replaced, the mean time between failures is better in the ZFS situation, but the 'gotcha' value becomes absolutely huge: since you can replace one (or two, or more) disks without issue, the problem will likely take years to develop.

Ah well, price of the bleeding edge.  ;)

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------