On Tue, Feb 16, 2010 at 06:28:05PM -0800, Richard Elling wrote:
> The problem is that MTBF measurements are only one part of the picture.
> Murphy's Law says something will go wrong, so also plan on backups.

+n

> > Imagine this scenario:
> > You lost 2 disks, and unfortunately you lost the 2 sides of a mirror.
> 
> Doing some simple math, and using the simple MTTDL[1] model, you can
> figure the probability of that happening in one year for a pair of 700k hours
> disks and a 24 hour MTTR as:
>       Pfailure =  0.000086%  (trust me, I've got a spreadsheet :-)

Which is close enough to zero, but doesn't consider all the other
things that can go wrong: power surge, fire, typing destructive
commands in the wrong window, animals and small children, capricious
deities, forgetting to run backups, etc.  

These small numbers just tell you to be more worried about defending
against the other stuff.
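
For anyone who wants to check that figure without building a
spreadsheet, here's a minimal sketch of the arithmetic in Python,
assuming the usual two-way-mirror form of the MTTDL[1] model
(MTTDL = MTBF^2 / (N*(N-1)*MTTR)) and the numbers quoted above:

    # Rough check of the quoted MTTDL[1] number for a 2-way mirror.
    mtbf_hours = 700_000    # per-disk MTBF from the example
    mttr_hours = 24         # assumed time to replace and resilver
    n_disks = 2             # two-way mirror

    # Mean time to data loss: the second side has to die inside the
    # repair window of the first.
    mttdl = mtbf_hours ** 2 / (n_disks * (n_disks - 1) * mttr_hours)

    # Approximate probability of losing the mirror in one year.
    p_year = 8760 / mttdl
    print(f"MTTDL ~ {mttdl:.3e} hours, P(loss in a year) ~ {p_year:.6%}")
    # prints roughly 0.000086%, matching the figure above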

> > You have 2 choices to pick from:
> > - loose entirely Mary, Gary's and Kelly's "documents"
> > or
> > - loose a small piece of Everyone's "documents".

Back to the OP's question, it's worth making the distinction here
between "lose" as in not-able-to-recover-because-there-are-no-backups,
and "lose" as in some data being out of service and inaccessible for a
while, until it's restored.  Perhaps this is what "loose" means? :)

If the goal is partitioning "service disruption" rather than "data loss",
then splitting things into multiple pools, or even multiple servers, is
a valid tactic - and it opens up further options such as failover.
That's well-covered ground, and one reason it's done at the pool level
is that it allows concrete reasoning about exactly what will and won't
be affected in each failure scenario.  Setting preferences, such as
copies or the similar alternatives suggested here, will never be able
to provide the same concrete assurance.
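
To make that contrast concrete, a hypothetical sketch (the pool,
dataset and device names are made up, not from the OP's setup):

    # Separate pools: losing both sides of one mirror takes out only
    # that group's pool; the other pools stay in service.
    zpool create marypool mirror c0t0d0 c0t1d0
    zpool create garypool mirror c0t2d0 c0t3d0

    # One big pool with extra copies: spreads the risk around, but
    # makes no hard statement about which data survives a
    # two-disk failure.
    zfs set copies=2 tank/documents

Either way, the pool boundary is what you can reason about; copies
only changes the odds inside it.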

--
Dan.
