On 4-Mar-09, at 7:35 PM, Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
"gm" == Gary Mills <mi...@cc.umanitoba.ca> writes:
gm> I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
respond.
I appreciate that.
I thought it was kind of based on a mistaken understanding. It included
this strangeness of the upper ZFS ``informing'' the lower one when
corruption had occurred on the network, and the lower ZFS was supposed
to do something with the physical disks...to resolve corruption on the
network? Why? IIRC several others pointed out the same bogosity.
It's simply a consequence of ZFS's end-to-end error detection.
There are many different components that could contribute to such
errors. Since only the lower ZFS has data redundancy, only it can
correct the error.
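For context, the two-level arrangement under discussion looks roughly like
this (a sketch only; pool names, sizes, and device paths are placeholders,
and shareiscsi is just one way the zvol might have been exported on builds
of that era):

    # storage server: the "lower" ZFS, the only layer with redundancy
    zpool create lowerpool mirror c0t0d0 c0t1d0
    zfs create -V 100G lowerpool/lun0
    zfs set shareiscsi=on lowerpool/lun0    # export the zvol as an iSCSI target

    # file server: the "upper" ZFS, a plain stripe over the imported LUN
    # (it checksums end to end, but has no second copy to repair from)
    zpool create upperpool c2t600144F0XXXXXXXXd0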
Why aren't application level checksums the answer to this "problem"?
--Toby
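An application-level checksum of the sort Toby asks about would amount to
something like this (a sketch; paths are arbitrary):

    # write side: record a checksum next to the data
    digest -a sha256 /upperpool/data/file > /upperpool/data/file.sha256

    # read side: recompute and compare; this detects corruption,
    # but there is no redundant copy to repair the file from
    digest -a sha256 /upperpool/data/file | diff - /upperpool/data/file.sha256

It covers the detection half of the problem, but not the correction that the
redundant lower pool is meant to provide.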
Of course, if something in the data path
consistently corrupts the data regardless of its origin, it won't be
able to correct the error. The same thing can happen in the simple
case, with one ZFS over physical disks.
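The detect-versus-repair distinction is easy to see in that simple case with
file-backed vdevs (a sketch; names and sizes are arbitrary, and whether the
dd happens to land on allocated blocks depends on the pool's layout):

    # a pool with no redundancy can detect the damage but not heal it
    mkfile 128m /var/tmp/vdev0
    zpool create demopool /var/tmp/vdev0
    mkfile 32m /demopool/data
    dd if=/dev/urandom of=/var/tmp/vdev0 bs=1024k count=4 seek=16 conv=notrunc
    zpool scrub demopool
    zpool status -v demopool    # reports checksum errors and permanent data errors

    # repeat with "zpool create demopool mirror /var/tmp/vdev0 /var/tmp/vdev1"
    # and the scrub repairs the damaged side from the intact one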
It makes slightly more sense in the write direction than the read
direction, maybe, but I still don't fully get the plan. Is it a new
protocol to replace iSCSI? Or NFS? Or what? Is it a re-invention of
pNFS or Lustre, but with more work, since you're starting from zero,
and less architectural foresight?
I deliberately did not specify the protocol to keep the concept
general. Anything that works and solves the problem would be good.
--
-Gary Mills- -Unix Support- -U of M Academic Computing and Networking-
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss