Darren J Moffat wrote:
Dave Lowenstein wrote:
Nope, doesn't work.

Try presenting one of those lun snapshots to your host, run cfgadm -al,
then run zpool import.


#zpool import
no pools available to import

Does format(1M) see the luns ?  If format(1M) can't see them it is
unlikely that ZFS will either.

It would make my life so much simpler if you could do something like
this: zpool import --import-as yourpool.backup yourpool

     zpool import [-o mntopts] [ -o property=value] ... [-d dir |
     -c cachefile] [-D] [-f] [-R root] pool | id [newpool]

         Imports a specific pool. A pool can be identified by its
         name or the numeric identifier. If newpool is specified,
         the pool is imported using the name newpool.  Otherwise,
         it is imported with the same name as its exported name.
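Note the optional [newpool] argument: for a pool that "zpool import" can actually see, a rename on import already works today (the pool must be exported first), e.g.:

     # zpool export yourpool
     # zpool import yourpool yourpool.backup

The trouble described below is that a duplicate pool never appears in the import listing in the first place, so [newpool] never gets a chance to apply.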

Given that the pool is a snapshot of one or more vdevs in an existing ZFS storage pool, not only is the "name" identical, so is the "numeric identifier". It can be observed that when using "zpool import ...", duplicates are suppressed, even if those duplicates are entirely separate vdevs containing block-based snapshots, physical copies, remote mirrors or iSCSI targets.

The steps to reproduce this behavior on a single node, using files and standard Solaris utilities, are as follows:

# mkfile 500m /var/tmp/pool_file
# zpool create pool /var/tmp/pool_file
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME                  STATE    READ WRITE CKSUM
        pool                  ONLINE      0     0     0
          /var/tmp/pool_file  ONLINE      0     0     0

errors: No known data errors

# zpool export pool
# dd if=/var/tmp/pool_file of=/var/tmp/pool_snapshot
  { wait, wait, wait, ... more on this later ...}
1024000+0 records in
1024000+0 records out
# zpool import -d /var/tmp
  pool: pool
    id: 14424098069460077054
 state: ONLINE
action: The pool can be imported using its name or numeric identifier
config:

        pool                  ONLINE
          /var/tmp/pool_file  ONLINE

Question: What happened to the other ZFS storage pool, called pool_snapshot?

Answer: Its presence is suppressed by zpool import. If one were to move /var/tmp/pool_file to some other directory, /var/tmp/pool_snapshot would then appear.

# mv /var/tmp/pool_file /var/pool_file
# zpool import -d /var/tmp
  pool: pool
    id: 14424098069460077054
 state: ONLINE
action: The pool can be imported using its name or numeric identifier
config:

        pool                      ONLINE
          /var/tmp/pool_snapshot  ONLINE

At this point, if one were to go ahead with the import of pool (which would work) and then rename /var/pool_file back to /var/tmp/pool_file, its presence would again be suppressed. Conversely, if the rename were done first and a zpool import then attempted, again only one storage pool would exist at any given time.

Clearly there is some explicit suppression of duplicate storage pools going on here. Browsing the ZFS code for an answer, the logic surrounding vdev_inuse() seems to cause this behavior, expected or not.

        http://cvs.opensolaris.org/source/search?q=vdev_inuse&project=%2Fonnv

=========

As mentioned earlier, the {wait, wait, wait, ...} can be eliminated by using Availability Suite Point-in-Time Copy, by itself or in combination with Availability Suite Remote Copy or iSCSI Target, all of which are present in OpenSolaris today, and all of which are much faster than the dd utility.
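For example, an independent point-in-time copy of the pool's backing volume could be taken with Instant Image (a sketch; iiadm is the Availability Suite point-in-time copy CLI, and the raw device paths here are placeholders):

     # iiadm -e ind /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0
     # iiadm -w /dev/rdsk/c1t1d0s0

The first command enables an independent (full-copy) master/shadow/bitmap set; the second waits for the copy to complete, after which the shadow is a block-for-block copy of the master, with no dd pass required.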

As someone who supports both Availability Suite and iSCSI Target, not suppressing duplicate pool names and pool identifiers, in combination with a rename on import, "zpool import -new <name> ...", would provide a means to support various copies, or nearly identical copies, of a ZFS storage pool on the same Solaris host.
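To illustrate the proposed behavior (hypothetical syntax; no such -new option exists today), importing the snapshot copy from the example above alongside the original might look like:

     # zpool import -d /var/tmp -new pool.backup pool

leaving both pool (the original) and pool.backup (the snapshot copy) active on the same host.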

While browsing the ZFS source code, I noticed that "usr/src/cmd/ztest/ztest.c" includes ztest_spa_rename(), a ZFS test which renames a ZFS storage pool to a different name, tests the pool under its new name, and then renames it back. I wonder why this functionality was not exposed as part of zpool support?

- Jim


# zpool import foopool barpool



--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Jim Dunham
Storage Platform Software Group
Sun Microsystems, Inc.
work: 781.442.4042
cell:   603-724-3972

http://blogs.sun.com/avs
http://www.opensolaris.org/os/project/avs/
http://www.opensolaris.org/os/project/iscsitgt/
http://www.opensolaris.org/os/community/storage/

