On Jul 7, 2010, at 3:27 AM, Richard Elling wrote:

> 
> On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
> 
>> Hello list,
>> 
>> I posted this a few days ago on the opensolaris-discuss@ list.
>> I am posting here because there may be too much noise on other lists.
>> 
>> I have been without this ZFS pool for a week now.
>> My main concern at this point is whether it is even possible to recover this zpool.
>> 
>> How does the metadata work? What tool could I use to rebuild the
>> corrupted parts, or even find out which parts are corrupted?
>> 
>> 
>> Most, but not all, of these disks are Hitachi retail 1TB disks.
>> 
>> 
>> I have a file server that runs FreeBSD 8.1 (ZFS v14).
>> After a power outage, I am unable to import my zpool named Network.
>> The pool is made up of six 1TB disks configured in raidz;
>> there is ~1.9TB of actual data on this pool.
>> 
>> I have loaded OpenSolaris snv_134 on a separate boot disk
>> in hopes of recovering my zpool.
>> 
>> On OpenSolaris snv_134, I am not able to import my zpool;
>> almost everything I try gives me: cannot import 'Network': I/O error
>> 
>> I have done quite a bit of searching, and I found that 'zpool import -fFX
>> Network' should work;
>> however, after ~20 hours this hard-locks OpenSolaris (though it does
>> still respond to ping).
>> 
>> Here is a list of commands that I have run on OpenSolaris:
>> 
>> http://www.puffybsd.com/zfsv14.txt
> 
> You ran "zdb -l /dev/dsk/c7t5d0s2", which is not the same as
> "zdb -l /dev/dsk/c7t5d0p0", because of the default partitioning.
> In Solaris, c*t*d*p* are fdisk partitions and c*t*d*s* are SMI or
> EFI slices. This is why labels 2 and 3 could not be found, and it
> could be part of the problem to start with.

This is unlikely to be the problem, as the raidz vdev is reported as ONLINE,
though you can use the attached script to verify this.
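
For completeness, the label check Richard describes can be repeated against
the fdisk partition node rather than the slice. A minimal sketch, assuming
the same c7t5d0 device from your log (repeat for each of the six disks):

   # p0 covers the whole disk; if the device itself is intact, all four
   # ZFS labels (two at the start, two at the end) should be printed:
   zdb -l /dev/dsk/c7t5d0p0

   # For comparison, the slice that was checked earlier:
   zdb -l /dev/dsk/c7t5d0s2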


Attachment: raidz_open2.d
Description: Binary data
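
The script is attached rather than inlined, so the exact invocation may
differ; as a rough sketch of how one might run it (assuming it takes no
arguments and only needs to be running while the import is retried):

   # In one terminal, start the tracing script (run as root or via pfexec):
   pfexec dtrace -s raidz_open2.d

   # In another terminal, retry the import so the script can observe how
   # each leaf vdev of the raidz opens:
   pfexec zpool import -f Network

   # If that still fails with the I/O error, a dry run of the rewind
   # recovery mentioned above shows whether -F could recover the pool
   # without actually modifying it:
   pfexec zpool import -f -F -n Network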

