I'm not actually issuing any zpool commands when starting up the new instance.
None are needed; the instance is booted from an image which has the zpool
configuration stored within it, so it simply boots, sees that the devices
aren't yet available, and picks them up once I've attached the EBS volumes.
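
(For context: if the pool configuration were not already cached in the image
I'd expect to have to import the pool by hand after attaching the volumes,
roughly like the following, using the pool name created below; the -d form
only if the devices had to be searched for:

# zpool import foo
# zpool import -d /dev/dsk foo

but as described above, no such step is being run here.)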

Before the image was bundled, the following zpool commands were issued, with
the EBS volumes attached at "10" (primary), "6" (log main) and "7" (log
mirror):

# zpool create foo c7d16 log mirror c7d6 c7d7
# zpool status
  pool: mnt
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mnt         ONLINE       0     0     0
          c7d1p0    ONLINE       0     0     0
          c7d2p0    ONLINE       0     0     0

errors: No known data errors

  pool: foo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          c7d16     ONLINE       0     0     0
        logs        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7d6    ONLINE       0     0     0
            c7d7    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7d0s0    ONLINE       0     0     0

errors: No known data errors

After booting a new instance based on the image I see this:

# zpool status

  pool: foo
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         UNAVAIL      0     0     0  insufficient replicas
          c7d16     UNAVAIL      0     0     0  cannot open
        logs        UNAVAIL      0     0     0  insufficient replicas
          mirror    UNAVAIL      0     0     0  insufficient replicas
            c7d6    UNAVAIL      0     0     0  cannot open
            c7d7    UNAVAIL      0     0     0  cannot open

This changes back to "ONLINE" (matching the earlier output) once the EBS
volumes are attached.
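
The attach itself is just the standard EC2 API tools step; roughly the
following (the volume and instance IDs here are placeholders), after which
zpool status reports the pool as ONLINE again without any further zpool
commands from me:

# ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d 10
# zpool status foo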

After reading through the documentation a little more, I'm wondering: could
this be due to the zpool.cache file being stored on the image itself (and
therefore reset to the bundled copy on each boot) rather than somewhere more
persistent?
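
If so, I guess the cachefile pool property would be the thing to look at; as a
rough sketch of what I'd try (the path is only an example of a more persistent
location):

# zpool get cachefile foo
# zpool set cachefile=/persistent/zfs/zpool.cache foo
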
-- 
This message posted from opensolaris.org