Hi Mark,

I would recheck with fmdump to see if you have any persistent errors
on the second disk.

The fmdump command displays faults, and fmdump -eV displays the underlying
error reports (persistent errors that have been diagnosed as faults based on
some criteria).

If fmdump -eV doesn't show any activity for that second disk, then
review /var/adm/messages or iostat -En for driver-level resets and
so on.
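
Taken together, a quick triage pass might look like this (a sketch; adjust
paths and pipe targets to taste):

```shell
# 1. List diagnosed faults (the FMA fault log)
fmdump

# 2. List the underlying error reports, verbose (the FMA error log)
fmdump -eV | less

# 3. Check per-device soft/hard/transport error counters
iostat -En

# 4. Look for driver-level resets, retries, or timeouts in the system log
grep -i -e reset -e retry -e timeout /var/adm/messages
```

If step 2 is quiet for the suspect disk but step 4 shows resets, the problem
is more likely in the path (cabling, HBA, driver) than in the disk itself.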

Thanks,

Cindy

On 08/16/10 18:53, Mark Bennett wrote:
Nothing like a "heart in mouth moment" to shave years from your life.

I rebooted a snv_132 box in perfect health, and it came back up with two FAULTED 
disks in the same vdev.

Everything I found in an hour on Google basically said "your data is gone".

All 45 TB of it.

A postmortem with fmadm showed a single disk had failed with a SMART predictive 
failure. No indication why the second failed.

I don't give up easily, and it is now back up and scrubbing - no errors so far.

I checked that both drives were readable, so it didn't seem to be a hardware 
fault.
I moved one into a different server and ran a zpool import to see what it made 
of it.

The disk was ONLINE, and its vdev buddies were unavailable.
Ok, so I moved the disks into different bays and booted from the snv_134 cdrom.
Ran zpool import and the zpool came back with everything online.

That was encouraging, so I exported it and booted from the original 132 boot 
drive.

Well, it came back, and at 1:00 AM I was able to get back to the original issue 
I was chasing.
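
For anyone hitting the same wall, the recovery sequence above amounts to
roughly this (a sketch; the pool name "tank" is a placeholder, and the CD step
assumes booting a newer build such as snv_134):

```shell
# After moving the disks to different bays and booting the newer media:
zpool import            # scan for importable pools and show vdev states
zpool import -f tank    # force-import if the pool was not cleanly exported

# Verify the pool and start checking data integrity
zpool status -v tank
zpool scrub tank

# Cleanly export before booting back from the original drive
zpool export tank
```

The clean export matters: it releases the pool so the original snv_132 boot
environment can import it again without complaint.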

So, don't give up hope when all hope appears to be lost.

Mark.

Still an OpenSolaris fan, keen to help the community achieve a 2010 release on 
its own.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
