Mark Grant wrote:
Yeah, this is my main concern with moving from my cheap Linux server with no 
redundancy to ZFS RAID on OpenSolaris. I don't really want to pay twice as 
much for 'enterprise' disks, which appear to be exactly the same drives with a 
flag set in the firmware to limit read retries; but I also don't want to lose 
all my data because a sector fails, the drive hangs for a minute trying to 
relocate it, and the file system falls over as a result.
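(For reference: the firmware "flag" being described here is usually SCT Error Recovery Control, a.k.a. TLER/CCTL. On drives that expose it, it can be queried and set with smartctl rather than requiring the enterprise SKU — a sketch, assuming a drive at the hypothetical path /dev/sda that supports SCT ERC:)

```shell
# Query the current error-recovery time limits (reported in units of 100 ms).
smartctl -l scterc /dev/sda

# Cap read and write error recovery at 7 seconds each (70 x 100 ms),
# the typical RAID-friendly setting. Note: many consumer drives either
# don't support SCT ERC or reset it on power cycle, so this would need
# to be reapplied at boot.
smartctl -l scterc,70,70 /dev/sda
```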

I haven't found a definitive answer as to whether this will kill a ZFS RAID 
the way it kills traditional hardware RAID, or whether ZFS will recover once 
the drive stops attempting to relocate the sector. At least with a 
single-drive setup the OS will eventually get an error response, and the 
other files on the disk will still be readable when I copy them over to a new 
drive.

I don't think ZFS does any timing out itself. It's up to the drivers 
underneath to time out and send an error back to ZFS - only they know what's 
reasonable for a given disk type and bus type. So I guess this may depend on 
which drivers you are using. I don't know what the timeouts are, but I have 
observed them to be long in some cases when things do go wrong and timeouts 
and retries are triggered.
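(On OpenSolaris, for disks attached through the sd driver, the per-command timeout is a documented tunable. A minimal sketch of shortening it via /etc/system — assuming the sd driver is in the path; the exact value is illustrative, and the effective hang can still be several times longer because the driver retries failed commands:)

```
* /etc/system fragment - sd_io_time is the per-command timeout in
* seconds (default 60). Lowering it makes the driver report errors
* back to ZFS sooner when a drive stalls on a bad sector.
set sd:sd_io_time=7
```

(A reboot is needed for /etc/system changes to take effect, and an overly aggressive value risks false failures on healthy but slow buses, so this trades safety margin for faster error reporting.)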

--
Andrew
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
