Hi Edward,

Thank you for the feedback. It all makes sense.

To clarify: yes, I snapshotted the VM within ESXi, not the filesystems within the pool. Unfortunately, because of my misunderstanding of how ESXi snapshotting works, I'm now left without the option of investigating whether the replaced disk could have been used to create a new pool.

For anyone interested: I removed the c8t1d0 disk from the VM, took an ESXi snapshot, experimented a little, removed the 'corrupt' disks, added c8t1d0 back in, and ran 'zdb -l' against it, which did show a vdev of type 'replacing' with two children. That looked quite promising, but I wanted to wait until someone had chipped in with suggestions on recovering from the replaced disk, so I decided to look at the corrupt data again. I reverted to the snapshot in ESXi, which brought back my corrupt disks (as you'd expect) but unfortunately *deleted* (!?) the VMDK files relating to c8t1d0. Not a ZFS/Solaris issue of any kind, I know, but something to watch out for if anyone else is trying things out in this unsupported configuration. It's a shame I can't look into getting data back from the 'good' virtual disk - that's a question I'd still like answered, so I might return to it once I've put this matter to bed.

In the meantime, I'll see what I can do with dd_rescue, or dd with 'conv=noerror,sync', to produce some swiss-cheese VMDK files and see whether their content can be repaired (a rough sketch of what I mean is in the P.S. below). It's not the end of the world if they're gone, but I'd like to satisfy my own curiosity with this little exercise in recovery.

Thanks again for the input,
Chris
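P.S. In case it's useful to anyone attempting the same thing, the sort of invocation I have in mind is below. This is only a sketch - the paths are made up, so substitute your actual source device and a destination with enough free space. Plain dd with conv=noerror,sync carries on past read errors and pads the unreadable blocks with zeros instead of aborting, which is what leaves you with the "swiss-cheese" copy:

  # Copy the damaged flat VMDK, zero-filling any unreadable blocks.
  # Paths are hypothetical; point if= at the real source and of= somewhere safe.
  dd if=/path/to/damaged/vm-flat.vmdk \
     of=/path/to/recovery/vm-flat.vmdk \
     bs=512 conv=noerror,sync

dd_rescue's basic form is simply 'dd_rescue <infile> <outfile>'; it copes with read errors on its own and keeps going, so it should give a similar result with less babysitting.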
To clarify, yes, I snapshotted the VM within ESXi, not the filesystems within the pool. Unfortunately, because of my misunderstanding of how ESXi snapshotting works, I'm now left without the option of investigating whether the replaced disk could be used to create a new pool. For anyone interested, I removed the c8t1d0 disk from the VM, snapshotted, messed around a little, removed the 'corrupt' disks, added c8t1d0 back in, performed a 'zdb -l' which did show a disk of type 'replacing', with two children. That looked quite promising, but I wanted to wait until anyone had chipped in with some suggestions about how to recover from the replaced disk, so I decided to look at the corrupt data again. I reverted back to the snapshot in ESXi, bringing back my corrupt disks (as you'd expect), but which unfortunately *deleted* (!?) the VMDK files which related to c8t1d0. Not a ZFS/Solaris issue of any kind, I know, but one to watch out for potentially if anyone else is trying things out in this unsupported configuration. Shame I can't look into getting data back from the 'good' virtual disk - that's probably something I'd like answered so I might look into again once I've put this matter to bed. In the meantime, I'll see what I can do with dd_rescue or dd with 'noerror,sync' to produce some swiss-cheese VMDK files and see whether the content can be repaired. It's not the end of the world if they're gone, but I'd like to satisfy my own curiosity with this little exercise in recovery. Thanks again for the input, Chris -- This message posted from opensolaris.org _______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss