Miles Nordin wrote:
>>>>>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>>>>>> "tb" == Tom Bird <[EMAIL PROTECTED]> writes:
> 
>     tb> There was a problem with the SAS bus which caused various
>     tb> errors including the inevitable kernel panic, the thing came
>     tb> back up with 3 out of 4 zfs mounted.
> 
>     re> In general, ZFS can only repair conditions for which it owns
>     re> data redundancy.
> 
> If that's really the excuse for this situation, then ZFS is not
> ``always consistent on the disk'' for single-VDEV pools.

That is the wrong implication. ZFS never writes new (meta)data over 
currently allocated blocks; that is how on-disk consistency is achieved.
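A minimal sketch of that copy-on-write discipline (the `Pool`/`uberblock` names here are illustrative only, not actual ZFS code): new data always lands at a fresh address, and the root pointer is switched atomically afterward, so the previous on-disk state is never destroyed mid-update.

```python
class Pool:
    """Toy model of copy-on-write: live blocks are never overwritten."""

    def __init__(self):
        self.blocks = {}        # address -> data, simulating on-disk blocks
        self.next_addr = 0
        self.uberblock = None   # points at the current consistent root

    def allocate(self, data):
        """Write data to a *new* address; never overwrite a live block."""
        addr = self.next_addr
        self.next_addr += 1
        self.blocks[addr] = data
        return addr

    def commit(self, root_addr):
        """Atomically switch the root pointer; the old tree stays intact."""
        self.uberblock = root_addr

pool = Pool()
old_root = pool.allocate(b"version 1")
pool.commit(old_root)

new_root = pool.allocate(b"version 2")   # old block is untouched
pool.commit(new_root)

# If power is lost before commit(), the uberblock still references
# a fully consistent old tree.
assert pool.blocks[old_root] == b"version 1"
```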

Recovery from corruption is another matter. The data read back may not 
be the data that was written in the first place, and ZFS has a facility 
to detect this: checksums. If there is more than one copy, there is a 
good chance that another copy is good. If there is only one copy, there 
is not much to do besides returning an I/O error.
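The read path described above can be sketched roughly as follows (a hypothetical illustration using CRC32 as a stand-in for ZFS's actual checksum algorithms): try each stored copy against the recorded checksum, return the first good one, and fail with an I/O error only when every copy is bad.

```python
import zlib

def read_with_repair(copies, expected_cksum):
    """Try each stored copy; return the first one whose checksum matches.
    If no copy verifies, nothing remains but to signal an I/O error."""
    for copy in copies:
        if zlib.crc32(copy) == expected_cksum:
            return copy
    raise IOError("all copies failed checksum verification")

data = b"important block"
cksum = zlib.crc32(data)

# One copy bit-flipped, one intact: the good copy is returned.
assert read_with_repair([b"imp0rtant block", data], cksum) == data

# Only a single, corrupted copy: an I/O error is all we can return.
try:
    read_with_repair([b"imp0rtant block"], cksum)
except IOError:
    pass
```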

There is also another failure scenario: (meta)data may be corrupted in 
memory before it is checksummed and written to disk. In that case, no 
matter how many copies are stored on disk, all of them are incorrect, 
even though they may still checksum correctly.
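To make that failure mode concrete, here is a toy sketch (again hypothetical, with CRC32 standing in for the real checksum): if the buffer is corrupted in RAM before the checksum is computed, every redundant copy verifies cleanly, yet all of them carry the wrong data.

```python
import zlib

def write_block(data, ncopies):
    """The checksum is computed over whatever is in memory at write time."""
    cksum = zlib.crc32(data)
    return [(data, cksum)] * ncopies   # every copy carries the same checksum

# Data corrupted in RAM *before* checksumming and writing:
corrupted = b"imp0rtant block"
copies = write_block(corrupted, ncopies=3)

# Every on-disk copy verifies against its checksum, yet all are wrong.
for stored, cksum in copies:
    assert zlib.crc32(stored) == cksum
    assert stored != b"important block"
```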

> There was no loss of data here, just an interruption in the connection
> to the target, like power loss or any other unplanned shutdown.

Unfortunately, that is only an assumption. By saying there was no loss 
of data, you assume the storage controller is bug-free and was not 
affected by the SAS bus issues in any way. That may not be the case, 
but it is impossible to tell from the data provided.

victor
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss