Yesterday I was able to import a zpool with a missing log device using the "zpool import -f -m myzpool" command. I had to boot from the Oracle Solaris Express Live CD. Then I just ran "zpool remove myzpool logdevice". That's it. Now I have my pool back, with all the data and with ONLINE status. My zpool (8 x 500 GB disks) had been sitting unavailable for almost six months. This was my Christmas present!
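For anyone hitting the same problem, the sequence that worked for me boils down to the commands below. The pool and device names are from my setup (substitute your own), and the -m option needs a build that includes PSARC/2010/292, such as the Oracle Solaris Express Live CD I booted from:

```shell
# Force-import the pool despite the missing separate log (slog) device.
# -f forces the import; -m tells ZFS to proceed without the log device
# (any uncommitted ZIL transactions that were on that device are lost).
zpool import -f -m myzpool

# The pool comes up with the log vdev marked missing; remove it from
# the configuration so the pool returns to ONLINE status.
zpool remove myzpool logdevice

# Verify the result.
zpool status myzpool
```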
Best regards,
Dmitry
Office phone: 905.625.6471 ext. 104
Cell phone: 416.529.1627

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Doyle
Sent: Sunday, August 01, 2010 1:40 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

A solution to this problem would be my early Christmas present! Here is how I lost access to an otherwise healthy mirrored pool two months ago:

A box running snv_130, with two disks in a mirror and an iRAM battery-backed ZIL device, was shut down in an orderly fashion and powered down normally. While I was away on travel, the PSU in the PC died while in its lowest-power standby state - this caused the Li battery in the iRAM to discharge, and all of the SLOG contents in the DRAM went poof.

I powered the box back up, but "zpool import -f tank" failed to bring the pool back online. After much research, I found the 'logfix' tool, got it to compile on another snv_122 box, and followed the directions to synthesize a "forged" log device header using the guid of the original device extracted from the vdev list. This failed to work, despite the binary tool running and some inspection of the guids using "zdb -l spoofed_new_logdev".

What's intriguing is that zpool is not even properly reporting the 'missing device'. See the output below from zpool, then zdb - notice that zdb shows the remnants of a vdev for a log device, but with guid = 0 ????

# zpool import
  pool: tank
    id: 6218740473633775200
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

[b]     tank          UNAVAIL  missing device
          mirror-0    ONLINE
            c0t1d0    ONLINE
            c0t2d0    ONLINE
[/b]
        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

# zdb -e tank

Configuration for import:
        vdev_children: 2
        version: 22
        pool_guid: 6218740473633775200
        name: 'tank'
        state: 0
        hostid: 9271202
        hostname: 'eon'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 6218740473633775200
            children[0]:
                type: 'mirror'
                id: 0
                guid: 5245507142600321917
                metaslab_array: 23
                metaslab_shift: 33
                ashift: 9
                asize: 1000188936192
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 15634594394239615149
                    phys_path: '/p...@0,0/pci1458,b...@11/d...@2,0:a'
                    whole_disk: 1
                    DTL: 55
                    path: '/dev/dsk/c0t1d0s0'
                    devid: 'id1,s...@sata_____st31000333as________________9te1jx8c/a'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 3144903288495510072
                    phys_path: '/p...@0,0/pci1458,b...@11/d...@1,0:a'
                    whole_disk: 1
                    DTL: 54
                    path: '/dev/dsk/c0t2d0s0'
                    devid: 'id1,s...@sata_____st31000528as________________9vp2kwam/a'
[b]         children[1]:
                type: 'missing'
                id: 1
                guid: 0
[/b]
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss