To me it seems this is a special case that has not been accounted for. While ZFS checks the disks against the pool and handles them correctly using labels/metadata, even when they are attached to different controllers, the problem I encountered arises when a specific device is flagged faulty and a disk that is already a valid pool member ends up attached at that same controller location.
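The scenario can be sketched as a hypothetical zpool session. The pool name "tank" and the device name c2d0 are assumptions for illustration; the commands themselves (`zpool status -x`, `zpool clear`) are standard zpool subcommands, but the exact output and behavior shown in the comments reflect the situation being described here, not documented ZFS semantics.

```shell
# Before the swap: ZFS has faulted the disk at c2d0.
zpool status -x tank     # hypothetically reports: c2d0 FAULTED

# After moving disks between controllers, a healthy pool member
# now sits at the c2d0 location, while the stale fault record
# still names c2d0.
zpool clear tank c2d0    # acts on the disk *currently* at c2d0;
                         # the old fault entry refers to a device
                         # that is no longer at that path
```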
ZFS was telling me that c2d0 went bad, but ZFS also wants you to clear the error once you have 'fixed' it. In the case of a faulty disk and swapped controllers, you get two mechanisms fighting over the same device path: c2d0 is faulty, and c2d0 is OK. This is why I want zpool to have an override option to clear an old entry that may no longer be valid because disks were moved or something similar.

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss