AFAIK, zfs should be able to protect against (if the pool is redundant), or at
least detect, corruption from the point that it is handed the data to the point
that the data is written to permanent storage, _provided_that_ the system
has ECC RAM (so it can detect, and often correct, random memory errors
caused by background radiation), and that, if zfs controls the whole disk and
the disk has a write cache, the disk correctly honors requests to flush the
write cache to permanent storage.  That should be just as true for a zvol as
for a regular zfs file.
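As a sketch of the setup described above (the disk, pool, and volume names
here are made up, and these commands need root on a system that has zfs):

```shell
# Create a redundant (mirrored) pool from two whole disks; with
# redundancy zfs can repair corruption, not just detect it.
zpool create tank mirror c1t0d0 c1t1d0

# Create a 10 GB zvol; it gets the same checksumming as a regular file.
zfs create -V 10g tank/iscsivol

# Confirm checksumming is enabled (it is by default).
zfs get checksum tank/iscsivol

# Periodically walk all data, verifying and repairing checksums.
zpool scrub tank
zpool status tank
```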

What I'm trying to say is that zfs should give you a lot of protection in your
situation, but it can do nothing if it is handed bad data: for example, if the
client is buggy and sends corrupt data, if a network error somehow goes
undetected (unlikely, given that AFAIK iSCSI runs over TCP, and at least thus
far never over UDP, and TCP always checksums its payload while UDP might not),
or if the iSCSI server software corrupts the data before writing it to disk.
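On the "unlikely" part: TCP's checksum is the 16-bit ones'-complement
Internet checksum (RFC 1071), which is weak enough that some corruptions
slip past it; swapping two 16-bit words, for instance, leaves it unchanged.
A small illustration in Python (the function here is my own sketch of the
algorithm, not a library call):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum as used by TCP/UDP/IP (RFC 1071)."""
    if len(data) % 2:          # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

original = b"\x01\x02\x03\x04"
corrupted = b"\x03\x04\x01\x02"   # two 16-bit words swapped in transit

# The checksum cannot tell the two apart, so this corruption would pass
# TCP's check; only an end-to-end checksum (like zfs's) would catch it.
assert internet_checksum(original) == internet_checksum(corrupted)
assert original != corrupted
```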

In other words, zfs probably gives more protection to a larger portion of the
data path than just about anything else, but in the case of a remote client,
whether iSCSI, NFS, CIFS, or whatever, the data path is longer and
distributed, and the verification zfs does covers only part of it.

What I'm saying would _not_ apply if the client were itself running zfs on
top of the iSCSI storage; in that case, the client's zfs would also be looking
after data integrity. In general, the closer to the data-generating
application that integrity protection begins, the fewer places something bad
can happen without at least being detected.
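That last idea can be sketched with an application-level checksum: hash the
data where it is produced, and re-verify after it has traversed the whole
path. (The function names below are illustrative, not part of any real iSCSI
or zfs API.)

```python
import hashlib

def checksum(data: bytes) -> str:
    # A strong hash computed where the data is generated covers every
    # later hop: client buffers, network, server software, disk.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    # Re-run the hash after the data has traversed the whole path.
    return checksum(data) == expected

payload = b"application data"
digest = checksum(payload)          # computed at the source

# Any corruption anywhere along the path is now detectable:
assert verify(payload, digest)
assert not verify(b"applicatiom data", digest)
```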

Note: I can't guarantee that any of what I said is correct, although I would
be willing to risk my own data as if it were.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
