Brandon High wrote:
On Mon, May 3, 2010 at 4:33 PM, Michael Shadle <mike...@gmail.com> wrote:
Is ZFS doing its magic checksumming and whatnot on this share, even
though it is seeing junk data (NTFS on top of iSCSI...) or am I not
getting any benefits from this setup at all (besides thin
provisioning, things like that)?

The data on disk is protected, but it's not protected over the wire.

You still get snapshots, cloning, and all the other zfs features for
the dataset though.

-B

If someone wrote a "ZFS client", it'd be possible to get over-the-wire data protection, continuous from the client computer all the way to the storage device. Right now there is data protection only from the server to the storage device. The best-protected apps are those running on the same server that has mounted the ZFS pool containing the data they need (in which case they are protected by ZFS checksums and by ECC RAM, if present).

A "ZFS client" would run on the computer connecting to the ZFS server, in order to extend ZFS's protection and detection out across the network.

In one model, the ZFS client could be a proxy for communication between the client and the server running ZFS. It would extend the filesystem checksumming across the network: verifying checksums locally as data was received, and calculating checksums locally before data was sent, which the server would then re-check. Recoverable checksum failures would be transparent except for a performance loss; unrecoverable failures would be reported using the standard OS unrecoverable-read error message (Windows has one that it uses for bad sectors on drives and optical media). The local client checksum calculations would also be useful in detecting network failures and local hardware instability. (I.e., if most or all clients start seeing checksum failures, look at the network; if only one client sees checksum failures, check that client's hardware.)
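A minimal sketch of the client-side verification step described above, assuming (hypothetically) that the server returns each block together with the checksum ZFS already stores for it. ZFS supports several checksum algorithms; SHA-256 is used here purely for illustration, and the function name is invented:

```python
import hashlib

def verify_block(data: bytes, server_checksum: str) -> bool:
    """Recompute the checksum of a received block locally and compare
    it against the checksum supplied by the server alongside the data."""
    return hashlib.sha256(data).hexdigest() == server_checksum

# Simulated round trip: the server sends (data, checksum), the client verifies.
block = b"some file contents"
checksum = hashlib.sha256(block).hexdigest()

assert verify_block(block, checksum)                 # clean transfer
assert not verify_block(block + b"\x00", checksum)   # corrupted in flight
```

A mismatch on one client points at that client's hardware; mismatches across many clients point at the network, exactly as described above.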

An extension to the ZFS client model would allow multi-level ZFS systems to better coordinate their protection and recover from more scenarios. By multi-level ZFS, I mean ZFS stacked on ZFS, say via iSCSI. An example (I'm sure there are better ones) would be three servers, each with three data disks. Each disk is made into its own non-redundant pool (nine non-redundant pools in all). These pools are in turn shared via iSCSI. One of the servers then creates RAIDZ1 groups, each using one disk from each of the three servers. With a means for the ZFS systems to communicate, the failure of a non-redundant lower-level device need not trigger a system halt of that lower system, because it would know from the higher-level system that the device can be repaired or replaced using the higher-level redundancy.
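A toy illustration of why the lower system need not halt: a top-level single-parity stripe (RAIDZ1 simplified here to plain XOR parity) spans the three backing pools, so a block lost with one pool can be rebuilt from the surviving members. This is only a model of the redundancy arithmetic, not ZFS's actual RAIDZ layout:

```python
def xor(*blocks: bytes) -> bytes:
    """XOR byte strings of equal length together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0 = b"AAAA"          # data block on server 1's non-redundant pool
d1 = b"BBBB"          # data block on server 2's non-redundant pool
parity = xor(d0, d1)  # parity block on server 3's non-redundant pool

# Server 2's pool fails. With cross-level coordination, its block is
# reconstructed from the other two pools instead of halting server 2.
rebuilt_d1 = xor(d0, parity)
assert rebuilt_d1 == d1
```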

A key to making this happen is an interface to request a block and its related checksum (or, in the case of CIFS, to request a file, its related blocks, and their checksums).

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss