>>>>> "dm" == David Magda <dma...@ee.ryerson.ca> writes:
dm> Given that ZFS is always consistent on-disk, why would you
dm> lose a pool if you lose the ZIL and/or cache file?

Because of lazy assertions inside 'zpool import'. You are right that there is no fundamental reason for it---it's just code that doesn't exist. If you are a developer you can probably still recover your pool, but there aren't any commands with a supported interface for doing it.

'zpool.cache' doesn't contain magical information, but it lets you pass through a different code path, one that doesn't include the ``BrrkBrrk, omg panic, device missing, BAIL OUT HERE'' checks. Even so, I don't think squirreling away copies of zpool.cache is a great way to make your pool safe from slog failures, because there may be other things about the manual 'zpool import' codepath that you need during a disaster, like -F, which will remain inaccessible to you if you rely on a saved-zpool.cache hack, even if the hack ends up actually working when the time comes, which it might not.

The case I think is really interesting is an HA cluster using a single-device slog made from a ramdisk on the passive node. That case would also become safer if slogs were fully disposable.
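To make the comparison concrete, here is a rough sketch of the two import paths being contrasted. The pool name 'tank' and the backup cache path are placeholders, and -F is only available on builds that carry the pool-recovery code:

    # the cachefile hack: feed a squirreled-away copy back to import
    zpool import -c /backup/zpool.cache tank

    # the manual import path: scan the device directory, and
    # optionally ask for txg rewind/recovery with -F
    zpool import -d /dev/dsk -F tank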