On Mon, 2007-02-26 at 07:00 -0800, dudekula mastan wrote:
> 
> Hi All,
>  
> I have a zpool (name as testpool) on /dev/dsk/c0t0d0. 
>  
> The command $ zpool import testpool imports (i.e. mounts) testpool.
>  
> How does the import command know that testpool was created
> on /dev/dsk/c0t0d0?
>  
> Also, the command $ zpool import (with no arguments) lists all the
> zpools that can be imported. How does it find them?

http://cvs.opensolaris.org/source/xref/clearview/usr/src/uts/common/fs/zfs/vdev_label.c


 * The vdev label serves several distinct purposes:
 *
 *      1. Uniquely identify this device as part of a ZFS pool and confirm its
 *         identity within the pool.
 *
 *      2. Verify that all the devices given in a configuration are present
 *         within the pool.
 *
 *      3. Determine the uberblock for the pool.
 *
 *      4. In case of an import operation, determine the configuration of the
 *         toplevel vdev of which it is a part.
 *
 *      5. If an import operation cannot find all the devices in the pool,
 *         provide enough information to the administrator to determine which
 *         devices are missing.
 *
 * It is important to note that while the kernel is responsible for writing the
 * label, it only consumes the information in the first three cases.  The
 * latter information is only consumed in userland when determining the
 * configuration to import a pool.
[...]

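The userland side of that last point can be pictured as follows. This is a toy model, not the real libzfs code: the dicts stand in for unpacked label nvlists, and only the field names (name, pool_guid, state, txg) come from the nvlist layout described further down; the sample values are invented.

```python
# Toy model of how userland could assemble `zpool import` output from
# label nvlists already read off the disks.  Labels from the same pool
# share a pool_guid; the config from the highest txg is the freshest.

labels = [
    {"name": "testpool", "pool_guid": 42, "state": "EXPORTED", "txg": 118},
    {"name": "testpool", "pool_guid": 42, "state": "EXPORTED", "txg": 120},
    {"name": "tank",     "pool_guid": 77, "state": "EXPORTED", "txg": 9},
]

def importable_pools(labels):
    """Group labels by pool_guid, keeping the config with the highest txg."""
    pools = {}
    for lab in labels:
        guid = lab["pool_guid"]
        best = pools.get(guid)
        if best is None or lab["txg"] > best["txg"]:
            pools[guid] = lab
    return pools

for guid, cfg in importable_pools(labels).items():
    print(f"pool: {cfg['name']}  guid: {guid}  (config from txg {cfg['txg']})")
```

Note that the grouping is by pool_guid rather than by name, which is why two different pools that happen to share a name can still be told apart at import time.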
 * On-disk Format
 * --------------
 *
 * The vdev label consists of two distinct parts, and is wrapped within the
 * vdev_label_t structure.  The label includes 8k of padding to permit legacy
 * VTOC disk labels, but is otherwise ignored.
 *
 * The first half of the label is a packed nvlist which contains pool wide
 * properties, per-vdev properties, and configuration information.  It is
 * described in more detail below.
 *
 * The latter half of the label consists of a redundant array of uberblocks.
 * These uberblocks are updated whenever a transaction group is committed,
 * or when the configuration is updated.  When a pool is loaded, we scan each
 * vdev for the 'best' uberblock.
 *
 *
 * Configuration Information
 * -------------------------
 *
 * The nvlist describing the pool and vdev contains the following elements:
 *
 *      version         ZFS on-disk version
 *      name            Pool name
 *      state           Pool state
 *      txg             Transaction group in which this label was written
 *      pool_guid       Unique identifier for this pool
 *      vdev_tree       An nvlist describing vdev tree.
 *
 * Each leaf device label also contains the following:
 *
 *      top_guid        Unique ID for top-level vdev in which this is contained
 *      guid            Unique ID for the leaf vdev
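The 'best uberblock' scan mentioned in the comment can be sketched like this. The Uberblock tuple here is a stand-in for illustration, not the on-disk layout; as far as I can tell the real comparison (vdev_uberblock_compare) picks the highest transaction group and breaks ties on timestamp.

```python
# Sketch of best-uberblock selection: among the valid uberblocks found
# in the label's uberblock array, the highest txg wins, with the write
# timestamp as a tiebreaker.

from collections import namedtuple

Uberblock = namedtuple("Uberblock", ["txg", "timestamp"])

def best_uberblock(ubs):
    """Pick the uberblock with the highest (txg, timestamp) pair."""
    return max(ubs, key=lambda ub: (ub.txg, ub.timestamp))

ring = [Uberblock(118, 1000), Uberblock(120, 1005), Uberblock(120, 1004)]
print(best_uberblock(ring))
```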


Every disk that has been part of a zpool at some point carries a vdev
label. zpool import scans all the devices that format or rmformat can
see and checks each one for a vdev label.
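That scan can be modelled very roughly as below. Real labels are packed, checksummed nvlists and import reads several label copies per device; in this toy version a magic prefix plus the pool name stands in for the label, and the byte buffers stand in for raw devices. The offset mimics the 8k pad mentioned in the comment above; everything else is invented for illustration.

```python
# Rough model of the device scan: walk every candidate device, try to
# read a label at a fixed offset, and remember which pool (if any) the
# device claims membership in.  Devices with no recognizable label are
# simply skipped -- which is how c0t0d0 is found to hold testpool.

LABEL_OFFSET = 8 * 1024        # mimics the 8k pad before the nvlist
MAGIC = b"ZFSLBL"              # toy stand-in for a valid packed nvlist

def read_label(device_bytes):
    """Return the pool name stored in the toy label, or None."""
    blob = device_bytes[LABEL_OFFSET:LABEL_OFFSET + 64]
    if not blob.startswith(MAGIC):
        return None
    return blob[len(MAGIC):].split(b"\0", 1)[0].decode()

# Fake devices: c0t0d0 carries a label for 'testpool', c0t1d0 is blank.
devices = {
    "/dev/dsk/c0t0d0": b"\0" * LABEL_OFFSET + MAGIC + b"testpool\0" + b"\0" * 64,
    "/dev/dsk/c0t1d0": b"\0" * (LABEL_OFFSET + 64),
}

for path, raw in devices.items():
    pool = read_label(raw)
    print(path, "->", pool if pool else "no ZFS label")
```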

Francois

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
