> On 9/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> > Sure, if you want *everything* in your pool to be mirrored, there is
> > no real need for this feature (you could argue that setting up the
> > pool would be easier if you didn't have to slice up the disk though).
> 
> Not necessarily.  Implementing this at the FS level will still allow
> the administrator to turn on copies for the entire pool, since the
> pool is technically also a FS and the property is inherited by child
> FS's.  Of course, this will also allow the admin to turn off copies
> on the FS containing junk.

Implementing it at the directory and file levels would be even more flexible:  
redundancy strategy would no longer be tightly tied to path location, while 
directories and files could still inherit defaults from the filesystem and pool 
when appropriate and be handled individually when desirable.
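
For concreteness, here is roughly what the quoted FS-level scheme would look 
like, assuming the proposed 'copies' property behaves like other inheritable 
dataset properties (the pool and dataset names below are just hypothetical 
examples):

    zpool create tank c0t0d0          # single-disk pool
    zfs set copies=2 tank             # the pool root is itself a dataset,
                                      # so children inherit copies=2
    zfs create tank/home              # inherits copies=2
    zfs create tank/scratch
    zfs set copies=1 tank/scratch     # turn the extra copy back off for
                                      # the FS containing junk
    zfs get -r copies tank            # verify inheritance and overrides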

I've never understood why redundancy was a pool characteristic in ZFS - and the 
addition of 'ditto blocks' and now this new proposal (both of which introduce 
completely new forms of redundancy to compensate for the fact that pool-level 
redundancy doesn't satisfy some needs) just makes me more skeptical about it.

(Not that I intend in any way to minimize the effort it might take to change 
that decision now.)

> 
> > It could be recommended in some situations.  If you want to protect
> > against disk firmware errors, bit flips, part of the disk getting
> > scrogged, then mirroring on a single disk (whether via a mirror vdev
> > or copies=2) solves your problem.  Admittedly, these problems are
> > probably less common than whole-disk failure, which mirroring on a
> > single disk does not address.
> 
> I beg to differ from experience that the above errors are more common
> than whole disk failures.  It's just that we do not notice the disks
> are developing problems but panic when they finally fail completely.

It would be interesting to know whether that would still be your experience in 
environments that regularly scrub active data as ZFS does (assuming that said 
experience was accumulated in environments that don't).  The theory behind 
scrubbing is that every data area gets read often enough that it doesn't have 
time to deteriorate (gradually) to the point where it can't be read at all; 
early deterioration encountered during a scrub pass (or other access), while 
the data has only begun to become difficult to read, results in immediate 
revectoring (by the disk or, failing that, by the file system) to a healthier 
location.
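
For concreteness, a scrub is just an explicit verify pass over every allocated 
block; a minimal sketch of putting it on a schedule, assuming a hypothetical 
pool named 'tank':

    zpool scrub tank                       # start a full verification pass
    zpool status tank                      # check progress / errors found
    # root crontab entry to scrub weekly (Sundays at 3 am):
    0 3 * * 0 /usr/sbin/zpool scrub tank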

Since ZFS-style scrubbing detects even the otherwise-undetectable 'silent 
corruption' missed by the disk's own ECC mechanisms, that lower-probability 
event is also covered (though my impression is that the probability of even a 
single such sector may be significantly lower than that of whole-disk failure, 
especially in laptop environments).

All that being said, keeping multiple copies on a single disk of most metadata 
(the loss of which could lead to widespread data loss) definitely makes sense 
(especially given its typically negligible size), and it probably makes sense 
for some files as well.
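
As a sketch of what that selectivity might look like on, say, a laptop's 
single-disk pool, again assuming the proposed property and hypothetical 
dataset names (metadata already gets extra 'ditto' copies automatically; the 
proposal would extend the same idea to file data on request):

    zfs set copies=2 tank/documents    # worth the extra space for these
    zfs set copies=1 tank/mp3s         # not worth doubling these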

- bill
 
 