Take a look at ZDB (the "ZFS Debugger"). Probably the "zdb -l"
(label dump) option would suffice for your task, e.g.:

# zdb -l /dev/dsk/c4t1d0s0 | egrep 'host|uid|name|devid|path'
    name: 'rpool'
    pool_guid: 12076177533503245216
    hostid: 13583512
    hostname: 'bofh-sol'
    top_guid: 1792815639752064612
    guid: 1792815639752064612
        guid: 1792815639752064612
        path: '/dev/dsk/c4t1d0s0'
        devid: 'id1,sd@SATA_____ST3808110AS_________________5LR557KB/a'
        phys_path: '/pci@0,0/pci8086,2847@1c,4/pci1043,81e4@0/disk@1,0:a'

(repeats for all 4 labels, values should match for a healthy pool)
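If this needs to be scripted on host2, the label dump is easy to parse.
Here is a minimal sketch; the here-doc below is just the sample output
from above standing in for a live "zdb -l" pipe, so the field names are
the only thing assumed:

```shell
# Parse pool name and hostid out of a 'zdb -l' label dump.
# The here-doc is a captured sample; in real use you would pipe
# 'zdb -l /dev/dsk/<disk>' in instead.
label=$(cat <<'EOF'
    name: 'rpool'
    pool_guid: 12076177533503245216
    hostid: 13583512
    hostname: 'bofh-sol'
EOF
)
# The pool name is single-quoted in the dump, so split on quotes;
# anchoring on "name:" avoids also matching the "hostname:" line.
pool_name=$(printf '%s\n' "$label" | awk -F"'" '$1 ~ /^ *name: $/ {print $2; exit}')
pool_hostid=$(printf '%s\n' "$label" | awk '$1 == "hostid:" {print $2; exit}')
echo "disk labeled for pool '$pool_name' by hostid $pool_hostid"
```

That prints the pool name and the hostid of the machine that last wrote
the label, which is exactly the information host2 is missing.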

2012-01-12 18:51, adele....@oracle.com wrote:
Hi all,

My customer has the following question.


Assume that, by mistake, I have allocated the same LUN from external
storage to two hosts. I create a zpool with this LUN on host1 with no
errors. When I then try to create a zpool on host2 using the same disk
(which is allocated to host2 as well), 'zpool create' comes back with
an error saying "/dev/dsk/cXtXdX is part of exported or potentially
active ZFS pool test".
Is there a way to check from host2 which zpool the disk belongs to?
Do the disks in a zpool have a private region that I can read to get
the zpool name or id?


Steps required to reproduce the problem
=======================================
Disk doubly allocated to host1, host2
host1 sees disk as disk100
host2 sees disk as disk101
host1# zpool create host1_pool disk1 disk2 disk100
returns success ( as expected )
host2# zpool create host2_pool disk1 disk2 disk101
invalid dev specification
use '-f' to override the following errors:
/dev/dsk/disk101 is part of exported or potentially active ZFS pool
test. Please see zpool

zpool did catch that the disk is part of an active pool, but since the
pool is not on the same host, I do not get the name of the pool to
which disk101 is allocated. It's possible we might go ahead and use the
'-f' option to create the zpool and start using this filesystem. By
doing that we would potentially destroy filesystems on host1 and host2,
which could lead to severe downtime.

Is there any way to get the name of the pool to which disk101 is
assigned (under a different device name on a different host)? This
would help us tremendously in avoiding a potential issue. Something
similar happened once before with Solaris 9 and UFS, taking out two
Solaris machines.

What happens if the disk is assigned to an AIX box and set up as part
of a volume manager on AIX, and we then try to create a zpool on a
Solaris host? Will ZFS catch this and report that something is wrong
with the disk?

Regards,
Adele

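As a safety net before anyone reaches for 'zpool create -f', the hostid
from the label can be compared against the local one. A sketch with
hard-coded sample values (an assumption for illustration; real use
would substitute the "zdb -l" and hostid(1) output noted in the
comments, and hostid prints hex while the label stores decimal):

```shell
# Compare the hostid recorded in a disk's ZFS label with this host's id.
# Sample values are hard-coded here (assumption); in practice:
#   label_hostid from: zdb -l /dev/dsk/<disk> | awk '$1=="hostid:"{print $2}'
#   local_hostid from: hostid(1), which prints hexadecimal
label_hostid=13583512              # decimal, as stored in the label
local_hostid=$((0x00a1b2c3))       # converted from hostid's hex output
if [ "$label_hostid" -ne "$local_hostid" ]; then
    decision="refuse: disk labeled by hostid $label_hostid, not this host"
else
    decision="ok: label matches this host"
fi
echo "$decision"
```

If the check says "refuse", the safe next step is to sort out the LUN
masking on the array rather than to force the create.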
HTH,
//Jim Klimov
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
