It's OK that you're missing labels 2 and 3 -- there are four copies
precisely so that you can afford to lose a few.  Labels 2 and 3
are at the end of the disk.  The fact that only they are missing
makes me wonder if someone resized the LUNs.  Growing them would
be OK, but shrinking them would indeed cause the pool to fail to
open (since part of it was amputated).
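
One easy way to test that theory: labels 2 and 3 live in the last 512K of the
slice, so just try to read the tail of the slice at the size the on-disk VTOC
still claims it has.  A rough sketch only (N stands for whatever Sector Count
prtvtoc reports for slice 0, and I'm assuming the /dev/rdsk path mirrors the
/dev/dsk name from your zdb output):

# prtvtoc /dev/rdsk/emcpower0a
# N=<sector count of slice 0 from the prtvtoc output>
# dd if=/dev/rdsk/emcpower0a of=/dev/null bs=512 iseek=`expr $N - 1024` count=1024

If that dd errors out instead of copying 1024 blocks, the LUN no longer reaches
where it used to end, which is exactly what a shrink would look like.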

There ought to be more helpful diagnostics in the FMA error log.
After a failed attempt to import, type this:

# fmdump -ev

and let me know what it says.
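
If plain -ev doesn't say much, the full payloads for just the ZFS ereports
should be something like this (the class pattern is from memory, so adjust it
if fmdump complains):

# fmdump -eV -c 'ereport.fs.zfs*'

That should show which device ZFS couldn't open and the error it got back.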

Jeff

On Tue, Apr 29, 2008 at 03:31:53PM -0400, Krzys wrote:
> 
> I have a problem with ZFS on one of my systems. I had a zpool built from 3 
> LUNs on a SAN; I did not put any RAID on top of it since the SAN is already 
> doing RAID. Anyway, the server rebooted and now I cannot see my pools, and 
> when I try to import the pool it fails. The SAN is an EMC CLARiiON and I am 
> using PowerPath.
> # zpool list
> no pools available
> # zpool import -f
>   pool: mypool
>   id: 4148251638983938048
> state: FAULTED
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
>   devices and try again.
>   see: http://www.sun.com/msg/ZFS-8000-3C
> config:
>   mypool        UNAVAIL  insufficient replicas
>     emcpower0a  UNAVAIL  cannot open
>     emcpower2a  UNAVAIL  cannot open
>     emcpower3a  ONLINE
> 
> I think I am able to see all the LUNs, and I should be able to access them 
> on my Sun box.
> # powermt display dev=all
> Pseudo name=emcpower0a
> CLARiiON ID=APM00070202835 [NRHAPP02]
> Logical device ID=6006016045201A001264FB20990FDC11 [LUN 13]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> ==============================================================================
> ---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
> ### HW Path I/O Paths Interf. Mode State Q-IOs Errors
> ==============================================================================
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016041E035A4d0s0 SP A4 active 
> alive 0 0
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016941E035A4d0s0 SP B5 active 
> alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016141E035A4d0s0 SP A5 
> active alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016841E035A4d0s0 SP B4 
> active alive 0 0
> 
> 
> Pseudo name=emcpower1a
> CLARiiON ID=APM00070202835 [NRHAPP02]
> Logical device ID=6006016045201A004C1388343C10DC11 [LUN 14]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> ==============================================================================
> ---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
> ### HW Path I/O Paths Interf. Mode State Q-IOs Errors
> ==============================================================================
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016041E035A4d1s0 SP A4 active 
> alive 0 0
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016941E035A4d1s0 SP B5 active 
> alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016141E035A4d1s0 SP A5 
> active alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016841E035A4d1s0 SP B4 
> active alive 0 0
> 
> 
> Pseudo name=emcpower3a
> CLARiiON ID=APM00070202835 [NRHAPP02]
> Logical device ID=6006016045201A00A82C68514E86DC11 [LUN 7]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> ==============================================================================
> ---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
> ### HW Path I/O Paths Interf. Mode State Q-IOs Errors
> ==============================================================================
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016041E035A4d3s0 SP A4 active 
> alive 0 0
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016941E035A4d3s0 SP B5 active 
> alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016141E035A4d3s0 SP A5 
> active alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016841E035A4d3s0 SP B4 
> active alive 0 0
> 
> 
> Pseudo name=emcpower2a
> CLARiiON ID=APM00070202835 [NRHAPP02]
> Logical device ID=600601604B141B00C2F6DB2AC349DC11 [LUN 24]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> ==============================================================================
> ---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
> ### HW Path I/O Paths Interf. Mode State Q-IOs Errors
> ==============================================================================
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016041E035A4d2s0 SP A4 active 
> alive 0 0
> 3074 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0 c2t5006016941E035A4d2s0 SP B5 active 
> alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016141E035A4d2s0 SP A5 
> active alive 0 0
> 3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL 
> PROTECTED]/[EMAIL PROTECTED],0 c3t5006016841E035A4d2s0 SP B4 
> active alive 0 0
> 
> 
> So format does show them as well:
> bash-3.00# echo | format
> Searching for disks...done
> AVAILABLE DISK SELECTIONS:
>   0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
>   /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
> PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>   1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
>   /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
> PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>   2. c1t2d0 <SEAGATE-ST973401LSUN72G-0556-68.37GB>
>   /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
> PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>   3. c1t3d0 <SEAGATE-ST973401LSUN72G-0556-68.37GB>
>   /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
> PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>   4. c2t5006016941E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],0
>   5. c2t5006016041E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],0
>   6. c2t5006016941E035A4d1 <DGC-RAID5-0324 cyl 32766 alt 2 hd 64 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],1
>   7. c2t5006016041E035A4d1 <DGC-RAID5-0324 cyl 32766 alt 2 hd 64 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],1
>   8. c2t5006016041E035A4d2 <DGC-RAID5-0324 cyl 32766 alt 2 hd 256 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],2
>   9. c2t5006016941E035A4d2 <DGC-RAID5-0324 cyl 32766 alt 2 hd 256 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],2
>   10. c2t5006016041E035A4d3 <DGC-RAID5-0324 cyl 63998 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],3
>   11. c2t5006016941E035A4d3 <DGC-RAID5-0324 cyl 63998 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],3
>   12. c3t5006016841E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],0
>   13. c3t5006016141E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],0
>   14. c3t5006016141E035A4d1 <DGC-RAID5-0324 cyl 32766 alt 2 hd 64 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],1
>   15. c3t5006016841E035A4d1 <DGC-RAID5-0324 cyl 32766 alt 2 hd 64 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],1
>   16. c3t5006016141E035A4d2 <DGC-RAID5-0324 cyl 32766 alt 2 hd 256 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],2
>   17. c3t5006016841E035A4d2 <DGC-RAID5-0324 cyl 32766 alt 2 hd 256 sec 10>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],2
>   18. c3t5006016841E035A4d3 <DGC-RAID5-0324 cyl 63998 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],3
>   19. c3t5006016141E035A4d3 <DGC-RAID5-0324 cyl 63998 alt 2 hd 256 sec 16>
>   /[EMAIL PROTECTED],700000/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
> PROTECTED],0/[EMAIL PROTECTED],3
>   20. emcpower0a <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
>   /pseudo/[EMAIL PROTECTED]
>   21. emcpower1a <DGC-RAID5-0324 cyl 32766 alt 2 hd 64 sec 10>
>   /pseudo/[EMAIL PROTECTED]
>   22. emcpower2a <DGC-RAID5-0324 cyl 32766 alt 2 hd 256 sec 10>
>   /pseudo/[EMAIL PROTECTED]
>   23. emcpower3a <DGC-RAID5-0324 cyl 63998 alt 2 hd 256 sec 16>
>   /pseudo/[EMAIL PROTECTED]
> Specify disk (enter its number): Specify disk (enter its number):
> 
> 
> 
> Now the fun part: troubleshooting this.
> 
> When I run zdb on emcpower3a, which seems to be OK from the zpool 
> perspective, I get the following output:
> bash-3.00# zdb -lv /dev/dsk/emcpower3a
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367380
>   pool_guid=4148251638983938048
>   top_guid=9690155374174551757
>   guid=9690155374174551757
>   vdev_tree
>   type='disk'
>   id=2
>   guid=9690155374174551757
>   path='/dev/dsk/emcpower3a'
>   whole_disk=0
>   metaslab_array=1813
>   metaslab_shift=30
>   ashift=9
>   asize=134208815104
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367380
>   pool_guid=4148251638983938048
>   top_guid=9690155374174551757
>   guid=9690155374174551757
>   vdev_tree
>   type='disk'
>   id=2
>   guid=9690155374174551757
>   path='/dev/dsk/emcpower3a'
>   whole_disk=0
>   metaslab_array=1813
>   metaslab_shift=30
>   ashift=9
>   asize=134208815104
> --------------------------------------------
> LABEL 2
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367380
>   pool_guid=4148251638983938048
>   top_guid=9690155374174551757
>   guid=9690155374174551757
>   vdev_tree
>   type='disk'
>   id=2
>   guid=9690155374174551757
>   path='/dev/dsk/emcpower3a'
>   whole_disk=0
>   metaslab_array=1813
>   metaslab_shift=30
>   ashift=9
>   asize=134208815104
> --------------------------------------------
> LABEL 3
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367380
>   pool_guid=4148251638983938048
>   top_guid=9690155374174551757
>   guid=9690155374174551757
>   vdev_tree
>   type='disk'
>   id=2
>   guid=9690155374174551757
>   path='/dev/dsk/emcpower3a'
>   whole_disk=0
>   metaslab_array=1813
>   metaslab_shift=30
>   ashift=9
>   asize=134208815104
> 
> 
> 
> But when I run zdb on emcpower0a, which does not seem to be OK, I get the 
> following output:
> bash-3.00# zdb -lv /dev/dsk/emcpower0a
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367379
>   pool_guid=4148251638983938048
>   top_guid=14125143252243381576
>   guid=14125143252243381576
>   vdev_tree
>   type='disk'
>   id=0
>   guid=14125143252243381576
>   path='/dev/dsk/emcpower0a'
>   whole_disk=0
>   metaslab_array=13
>   metaslab_shift=29
>   ashift=9
>   asize=107365269504
>   DTL=727
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>   version=3
>   name='mypool'
>   state=0
>   txg=4367379
>   pool_guid=4148251638983938048
>   top_guid=14125143252243381576
>   guid=14125143252243381576
>   vdev_tree
>   type='disk'
>   id=0
>   guid=14125143252243381576
>   path='/dev/dsk/emcpower0a'
>   whole_disk=0
>   metaslab_array=13
>   metaslab_shift=29
>   ashift=9
>   asize=107365269504
>   DTL=727
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to read label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to read label 3
> 
> 
> The same is true for emcpower2a in my pool.
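
A quick cross-check on the shrinkage theory, since these label 2 and 3 failures
line up with it: ask the disk driver what capacity each LUN reports now, and
compare it against the asize printed above.  This is a sketch only; iostat
lists the underlying cXtYdZ paths rather than the emcpower pseudo names, so the
entries have to be matched up by LUN.

# iostat -En

Each entry includes a "Size:" line with the byte count the device currently
reports.  For the emcpower0 LUN that number should be at least the asize shown
above (107365269504 bytes); anything noticeably smaller means the LUN really
did shrink.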
> 
> Is there a way to fix the failed labels 2 and 3? I know you need four of 
> them, but is there a way to reconstruct them? Or is my pool lost completely, 
> so that I need to recreate it? It would be odd if a reboot of a server could 
> cause such a disaster, but I was unable to find anywhere that people were 
> able to repair or recreate those labels. How would I recover my zpool? Any 
> help or suggestion is greatly appreciated.
> 
> Regards,
> 
> Chris
> 
> 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
