| Is it really true that, as the guy on the above link states (please
| read the link, sorry), when one iSCSI mirror goes offline, the
| initiator system will panic? Or, even worse, not boot itself cleanly
| after such a panic? How could this be? Does anyone else have
| experience with iSCSI-based ZFS mirrors?
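(For concreteness, the sort of configuration being asked about is an
initiator-side ZFS mirror across two separate iSCSI targets. A minimal
sketch of such a setup follows; the portal addresses, pool name, and
device names are placeholders, and real device names will be the long
target-derived ones the system assigns:

    # point the initiator at two separate iSCSI targets
    iscsiadm add discovery-address 192.168.0.10:3260
    iscsiadm add discovery-address 192.168.0.11:3260
    iscsiadm modify discovery --sendtargets enable
    # make the new LUNs show up as devices
    devfsadm -i iscsi
    # mirror one LUN from each target, so either target can die
    zpool create tank mirror c2t1d0 c3t1d0

The question is then what happens when one of the two targets drops
out.)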
Our experience with Solaris 10U4 and iSCSI targets is that Solaris
only panics if the pool fails entirely (e.g., you lose both/all disks
of a mirrored vdev). The fix for this is in current OpenSolaris
builds, and we have been told by our Sun support people that it will
(only) appear in Solaris 10 U6, apparently scheduled for sometime
around fall; the first PS below has my guess at what the fix looks
like in practice.

My experience is that Solaris will normally recover after the panic
and reboot, although failed ZFS pools will be completely inaccessible,
as you'd expect. However, there are two gotchas:

* under at least some circumstances, a completely unreachable iSCSI
  target (as you might get with, e.g., a switch failure) will stall
  booting for a significant length of time (tens of minutes, depending
  on how many iSCSI disks you have on it). The second PS below has one
  workaround.

* if a ZFS pool's storage is present but unwritable for some reason,
  Solaris 10 U4 will panic the moment it tries to bring the pool up;
  you will wind up stuck in a perpetual 'boot, panic, reboot, ...'
  cycle until you forcibly remove the storage entirely somehow (the
  third PS below sketches one way).

The second issue is presumably fixed as part of the general fix for
'ZFS panics on pool failure', although we haven't tested it
explicitly. I don't know if the first issue is fixed in current
Nevada builds.

	- cks
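PS: my understanding (an assumption on my part, not something our Sun
support people confirmed) is that the fix is the pool 'failmode' work,
which lets you choose per pool what happens when a pool's storage
fails, instead of always panicking. On a build with the fix, that
would look roughly like:

    # 'tank' is a placeholder pool name; failmode accepts
    # wait, continue, or panic
    zpool set failmode=continue tank
    zpool get failmode tank

With failmode=continue, I/O to the dead pool should start returning
errors rather than taking the whole machine down with it.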
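PPS: for the boot-stall gotcha, one approach is to stop the initiator
from trying to reach a target you already know is dead, so nothing
waits on it. The address here is a placeholder:

    # forget the dead target so the initiator stops retrying it
    iscsiadm remove discovery-address 192.168.0.10:3260
    # or, more drastically, turn off sendtargets discovery entirely
    iscsiadm modify discovery --sendtargets disable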
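PPPS: for the perpetual panic cycle, 'forcibly remove the storage' in
practice means getting a shell before ZFS tries to bring the pool up
and then making the pool's devices disappear. One way, on SPARC (the
service name is from memory, so check it with 'svcs' first):

    # from the OBP, boot without starting normal services
    ok boot -m milestone=none
    # at the resulting root shell, keep the iSCSI initiator from
    # coming up, so the broken pool's devices never appear
    svcadm disable network/iscsi_initiator
    reboot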