On 9/15/06, can you guess? <[EMAIL PROTECTED]> wrote:
Implementing it at the directory and file levels would be even more flexible:  
redundancy strategy would no longer be tightly tied to path location, but 
directories and files could themselves still inherit defaults from the 
filesystem and pool when appropriate (but could be individually handled when 
desirable).

Ideally so.  FS (or dataset) level is sufficiently fine-grained for my
use.  If I take the trouble to specify copies for a directory, I
really do not mind the trouble of creating a new dataset for it at the
same time.  File-level, however, is really pushing it.  You might end
up with an administrative nightmare deciphering which files have how
many copies.  I just do not see it being useful in my environment.
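For reference, the dataset-level knob being discussed is the standard ZFS
copies property, which can be set per dataset and is inherited by children.
A minimal sketch, assuming a hypothetical pool named "tank":

```shell
# Create a dedicated dataset for data that needs extra redundancy
# ("tank" is a hypothetical pool name used for illustration).
zfs create tank/important

# Keep two copies of every block written to this dataset.
# Only newly written data gets the extra copies; existing
# blocks are not rewritten retroactively.
zfs set copies=2 tank/important

# Child datasets inherit the setting unless they override it.
zfs get copies tank/important
```

This is exactly the "create a new dataset for it at the same time"
workflow mentioned above: one dataset per redundancy policy.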

It would be interesting to know whether that would still be your experience in 
environments that regularly scrub active data as ZFS does (assuming that said 
experience was accumulated in environments that don't).  The theory behind 
scrubbing is that all data areas will be hit often enough that they won't have 
time to deteriorate (gradually) to the point where they can't be read at all, 
and early deterioration encountered during the scrub pass (or other access) in 
which they have only begun to become difficult to read will result in immediate 
revectoring (by the disk or, if not, by the file system) to healthier locations.

Scrubbing exercises the disk area to prevent bit-rot.  I do not think
ZFS's scrubbing changes the failure mode of the raw devices.  OTOH, I
really have no such experience to speak of *fingers crossed*.  I
failed to locate the code where the relocation of data happens, but I
assume that copies would make this process more reliable.
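For anyone following along, the scrub pass described in the previous
paragraph is triggered and monitored with the standard zpool commands.
A sketch, again assuming a hypothetical pool named "tank":

```shell
# Start a scrub: ZFS reads every allocated block and verifies it
# against its checksum, repairing from redundant copies where it can
# ("tank" is a hypothetical pool name used for illustration).
zpool scrub tank

# Check scrub progress and any checksum errors found or repaired.
zpool status -v tank
```

Errors repaired during the scrub show up in the CKSUM column of the
status output, which is where the "immediate revectoring" above would
become visible to the administrator.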

Since ZFS-style scrubbing detects even otherwise-undetectable 'silent 
corruption' missed by the disk's own ECC mechanisms, that lower-probability 
event is also covered (though my impression is that the probability of even a 
single such sector may be significantly lower than that of whole-disk failure, 
especially in laptop environments).

I do not have any data to support or dismiss that.  Matt was right
that the probability of failure modes is a huge can of worms that can
drag on forever.


--
Just me,
Wire ...
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
