> > Most video formats are designed to handle
> > errors--they'll drop a frame or two, but they'll
> > resync quickly.  So, depending on the size of the
> > error, there may be a visible glitch, but it'll
> > keep working.
> 
> Actually, let's take MPEG as an example.  There are
> two basic frame types: anchor frames and predictive
> frames.  Predictive frames come in one-way and
> multi-way varieties.  The predictive frames offer
> significantly more compression than anchor frames,
> and are thus favored in more highly compressed
> streams.  However, if an error occurs in a frame,
> that error will propagate until it either moves off
> the frame or an anchor frame is reached.
> 
> In broadcast, they typically space the anchor frames
> every half second, to bound the time it takes to
> start a new stream when changing channels.  However,
> this also means that an error may take up to a half
> second to recover.  Depending upon the type of error,
> this could be confined to a single block, a stripe,
> or even a whole frame.
> 
> On more bandwidth-constrained systems, like
> teleconferencing, I've seen anchor frames spaced as
> much as 30 seconds apart.  These usually include
> some minimal error-concealment techniques, but
> they aren't really robust.
> 
> So I guess it depends upon what you mean by "recover
> fast". It could be as short as a fraction of a
> second, but could be several seconds.

Ah - thanks to both of you.  My own knowledge of video format internals is 
limited enough that I assumed most people here would be at least equally familiar 
with the notion that a flipped bit or two in a video hardly qualifies as any kind 
of disaster (or, in the case of commercial-quality video, often isn't even 
noticeable unless one is looking for it).
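
(Purely as a back-of-the-envelope illustration of the recovery bound described in 
the quoted explanation - the 30 fps frame rate here is my own assumption, and the 
two anchor spacings are just the figures mentioned above, so treat this as a rough 
sketch rather than anything authoritative:)

    # Worst-case visibility of a corrupted predictive frame, assuming the
    # damage persists until the next anchor frame arrives.  Frame rate and
    # anchor spacings below are illustrative guesses, not measured values.

    def worst_case_recovery(anchor_interval_s, frame_rate=30.0):
        """Longest a glitch can persist: the full gap between anchor frames."""
        frames_per_gap = int(anchor_interval_s * frame_rate)
        return anchor_interval_s, frames_per_gap

    for label, spacing_s in [("broadcast", 0.5), ("teleconferencing", 30.0)]:
        seconds, frames = worst_case_recovery(spacing_s)
        print(f"{label}: up to {seconds} s (~{frames} frames) before the next "
              f"anchor frame cleans up the glitch")

In other words, anywhere from half a second in the broadcast case to tens of 
seconds with the loosest teleconferencing spacing mentioned.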

David's comment about jpeg corruption would be more worrisome if it were clear 
that any significant number of 'consumers' (the immediate subject of my original 
comment in this area) had anything approaching 1 TB of jpegs on their systems 
(which, at an average of 1 MB per jpeg, would be around a million pictures).  If 
you include 'image files of various sorts', as he did (though this also raises 
the question of whether we're still talking about 'consumers'), then you also 
have to specify exactly how damaging single-bit errors are to those various 
'sorts' (one might guess not very, for the uncompressed formats that might well 
be taking up most of the space).  And since the CERN study seems to suggest that 
the vast majority of errors likely to be encountered at this level of incidence 
(and which could be caught by ZFS) are *detectable* errors, they'll typically 
(in the unlikely event that you encounter them at all) only require falling back 
to a RAID or backup copy; surely one wouldn't be entrusting data of any real 
value to a single disk.
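
(For what it's worth, here's the arithmetic behind that 'million pictures' figure, 
with a purely made-up per-file corruption probability thrown in just to show the 
scale involved - the 1 TB and 1 MB numbers come from the paragraph above, but the 
probability is not a measured rate from the CERN study or anywhere else:)

    # Back-of-the-envelope numbers for the hypothetical consumer jpeg hoard.
    TB = 10**12              # bytes in a (decimal) terabyte
    avg_jpeg = 10**6         # ~1 MB per jpeg, as assumed above

    num_jpegs = TB // avg_jpeg
    print(f"1 TB of 1 MB jpegs ~= {num_jpegs:,} pictures")      # ~1,000,000

    p_corrupt = 1e-7         # purely hypothetical per-file corruption chance
    print(f"At a (made-up) per-file corruption probability of {p_corrupt}, "
          f"you'd expect about {num_jpegs * p_corrupt:.2f} affected pictures")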

So I see no reason to change my suggestion that consumers just won't notice the 
level of increased reliability that ZFS offers in this area:  not only would 
the difference be nearly invisible even if the systems they ran on were 
otherwise perfect, but in the real world consumers have other reliability 
issues to worry about that occur multiple orders of magnitude more frequently 
than the kinds that ZFS protects against.

- bill
 
 