Victor Latushkin wrote:
On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:

R. Eulenberg wrote:
Sorry for reviving this old thread.

I have this problem on my (production) backup server as well. I lost my system HDD 
and my separate ZIL device when the system crashed, and now I'm in trouble. The 
old system was running the latest version of osol/dev (snv_134) with zfs 
v22. After the server crashed I was optimistic about solving the problem the 
same day; that was a long time ago now.
I set up a new system (osol 2009.06, updated to the latest osol/dev version, 
snv_134, with deduplication) and then tried to import my backup zpool, but it 
does not work.

# zpool import
 pool: tank1
   id: 5048704328421749681
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
  see: http://www.sun.com/msg/ZFS-8000-EY
config:

       tank1        UNAVAIL  missing device
         raidz2-0   ONLINE
           c7t5d0   ONLINE
           c7t0d0   ONLINE
           c7t6d0   ONLINE
           c7t3d0   ONLINE
           c7t1d0   ONLINE
           c7t4d0   ONLINE
           c7t2d0   ONLINE

# zpool import -f tank1
cannot import 'tank1': one or more devices is currently unavailable
       Destroy and re-create the pool from
       a backup source

The other options (-F, -X, -V, -D), in any combination, don't help either.
I cannot add / attach / detach / remove a vdev or the ZIL device either, 
because the system tells me there is no zpool 'tank1'.
In the last ten days I have read a lot of threads, troubleshooting guides and 
ZFS best-practice documentation, but I have not found a solution to my 
problem. I created a fake zpool with a separate ZIL device, to combine the new 
ZIL file with my old zpool for importing, but it doesn't work because of the 
differing GUID and checksum (I modified the name with a binary editor).
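(As an aside, and in case it helps when checking the hand-edited labels: zdb -l 
can dump the on-disk vdev labels of a single device, which should show the pool 
name, pool_guid and per-device guid exactly as ZFS sees them after editing. The 
device path below is just one of the disks from the config further down.)

# zdb -l /dev/dsk/c7t5d0s0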
The output of:
e...@opensolaris:~# zdb -e tank1

Configuration for import:
       vdev_children: 2
       version: 22
       pool_guid: 5048704328421749681
       name: 'tank1'
       state: 0
       hostid: 946038
       hostname: 'opensolaris'
       vdev_tree:
           type: 'root'
           id: 0
           guid: 5048704328421749681
           children[0]:
               type: 'raidz'
               id: 0
               guid: 16723866123388081610
               nparity: 2
               metaslab_array: 23
               metaslab_shift: 30
               ashift: 9
               asize: 7001340903424
               is_log: 0
               create_txg: 4
               children[0]:
                   type: 'disk'
                   id: 0
                   guid: 6858138566678362598
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@0,0:a'
                   whole_disk: 1
                   DTL: 4345
                   create_txg: 4
                   path: '/dev/dsk/c7t5d0s0'
                   devid: 
'id1,s...@sata_____samsung_hd103uj_______s13pj1bq709050/a'
               children[1]:
                   type: 'disk'
                   id: 1
                   guid: 16136237447458434520
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@1,0:a'
                   whole_disk: 1
                   DTL: 4344
                   create_txg: 4
                   path: '/dev/dsk/c7t0d0s0'
                   devid: 
'id1,s...@sata_____samsung_hd103uj_______s13pjdwq317311/a'
               children[2]:
                   type: 'disk'
                   id: 2
                   guid: 10876853602231471126
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@2,0:a'
                   whole_disk: 1
                   DTL: 4343
                   create_txg: 4
                   path: '/dev/dsk/c7t6d0s0'
                   devid: 
'id1,s...@sata_____hitachi_hdt72101______stf604mh14s56w/a'
               children[3]:
                   type: 'disk'
                   id: 3
                   guid: 2384677379114262201
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@3,0:a'
                   whole_disk: 1
                   DTL: 4342
                   create_txg: 4
                   path: '/dev/dsk/c7t3d0s0'
                   devid: 
'id1,s...@sata_____samsung_hd103uj_______s13pj1nq811135/a'
               children[4]:
                   type: 'disk'
                   id: 4
                   guid: 15143849195434333247
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@4,0:a'
                   whole_disk: 1
                   DTL: 4341
                   create_txg: 4
                   path: '/dev/dsk/c7t1d0s0'
                   devid: 
'id1,s...@sata_____hitachi_hdt72101______stf604mh16v73w/a'
               children[5]:
                   type: 'disk'
                   id: 5
                   guid: 11627603446133164653
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@5,0:a'
                   whole_disk: 1
                   DTL: 4340
                   create_txg: 4
                   path: '/dev/dsk/c7t4d0s0'
                   devid: 
'id1,s...@sata_____samsung_hd103uj_______s13pjdwq317308/a'
               children[6]:
                   type: 'disk'
                   id: 6
                   guid: 15036924286456611863
                   phys_path: 
'/p...@0,0/pci8086,2...@1e/pci11ab,1...@9/d...@6,0:a'
                   whole_disk: 1
                   DTL: 4338
                   create_txg: 4
                    path: '/dev/dsk/c7t2d0s0'
                   devid: 
'id1,s...@sata_____hitachi_hds72101______jp2921hq0kmeza/a'
           children[1]:
               type: 'missing'
               id: 1
               guid: 0

doesn't give me the GUID of the old ZIL device and ends without returning to a 
prompt (the process hangs).

Now I have added a ZIL device, the same as the old one, to the fake zpool, 
exported it, and tried to compile logfix, but that fails too:

e...@opensolaris:~/Downloads/logfix# make
make: Fatal error in reader: Makefile, line 9: Unexpected end of line seen
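
(That error looks like Sun's /usr/ccs/bin/make choking on GNU make syntax in the 
logfix Makefile, so invoking GNU make instead might get further; this is only a 
guess. On OpenSolaris gmake is usually available, e.g. from the SUNWgmake 
package.)

e...@opensolaris:~/Downloads/logfix# gmake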

I'm just an (advanced) user, not a developer, so I can't work with C source 
code, and my knowledge of the osol system internals is poor.

I need some help, please!
Thanks for any replies.

Best regards Ron
Hi,

I just recovered from a very similar zfs crash. What I did was:

Added the following to /etc/system. This apparently sets zdb into write mode.

set zfs:zfs_recover=1
set aok=1

This is not going to help in this case. By the way, before applying these 
parameters it is good to make sure that you fully understand why they are 
needed and what the consequences may be.

Then ran the following command:
zdb -e -bcsvL <zpool-name>
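
(For reference, and if I read the zdb options right: -b traverses and counts all 
blocks, -c verifies checksums along the way, -s reports I/O statistics, -v adds 
verbosity, -L disables leak tracking, and -e operates on a pool that is not 
imported, i.e. not in zpool.cache. If rebooting to pick up /etc/system is 
inconvenient, the same two tunables can, as far as I know, also be set on a 
live system with mdb:)

# echo "aok/W 1" | mdb -kw
# echo "zfs_recover/W 1" | mdb -kw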

Regards,
Sigbjorn



That's a valid point, and it's why I started the thread about ZFS recovery documentation. Or maybe I have missed some information, like what David pointed out for me?




_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
