I have an snv_52 server that I want to upgrade to the latest bits, either
via a non-debug build or a simple fresh install; I haven't decided which
yet.

I have a pile of disks hanging off it on two controllers, c0 and c1.

The disks on c1 are in a zpool thus :

bash-3.1$ zpool status
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs0        ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0
          c1t10d0   ONLINE       0     0     0
          c1t11d0   ONLINE       0     0     0
          c1t12d0   ONLINE       0     0     0
          c1t13d0   ONLINE       0     0     0
          c1t14d0   ONLINE       0     0     0

errors: No known data errors
bash-3.1$ zpool iostat -v zfs0 15 4
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.7G      0     19  14.7K  1.65M
  c1t9d0    30.3G  3.46G      0      3  2.44K   281K
  c1t10d0   30.3G  3.46G      0      3  2.47K   281K
  c1t11d0   30.3G  3.46G      0      3  2.43K   281K
  c1t12d0   30.3G  3.46G      0      3  2.45K   280K
  c1t13d0   30.3G  3.46G      0      3  2.49K   281K
  c1t14d0   30.3G  3.46G      0      3  2.43K   281K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.6G      0     97  63.9K  10.4M
  c1t9d0    30.3G  3.44G      0     13      0  1.71M
  c1t10d0   30.3G  3.44G      0     13  8.52K  1.71M
  c1t11d0   30.3G  3.44G      0     14  8.52K  1.73M
  c1t12d0   30.3G  3.44G      0     19  12.8K  1.74M
  c1t13d0   30.3G  3.44G      0     19  25.6K  1.74M
  c1t14d0   30.3G  3.44G      0     16  8.52K  1.74M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.4G      1    120  85.5K  11.7M
  c1t9d0    30.3G  3.41G      0     17  13.0K  1.95M
  c1t10d0   30.3G  3.41G      0     18  21.3K  1.97M
  c1t11d0   30.3G  3.41G      0     23  21.3K  1.96M
  c1t12d0   30.3G  3.41G      0     21  12.8K  1.95M
  c1t13d0   30.3G  3.41G      0     21  12.8K  1.97M
  c1t14d0   30.3G  3.40G      0     18  4.26K  1.94M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.3G      0    110  38.4K  10.4M
  c1t9d0    30.4G  3.38G      0     18  12.8K  1.74M
  c1t10d0   30.4G  3.38G      0     17  4.26K  1.75M
  c1t11d0   30.4G  3.38G      0     18      0  1.74M
  c1t12d0   30.4G  3.38G      0     17  8.53K  1.71M
  c1t13d0   30.4G  3.38G      0     18      0  1.70M
  c1t14d0   30.4G  3.38G      0     20  12.8K  1.77M
----------  -----  -----  -----  -----  -----  -----

bash-3.1$ uname -a
SunOS mars 5.11 snv_52 sun4u sparc SUNW,Ultra-2

Note the complete lack of redundancy !

Now then, I have a collection of six disks on controller c0 that I would
now like to mirror with this zpool zfs0.  That's the wrong way of thinking
about it, really.  In the SVM world I would create stripes and then mirror
them to get either RAID 0+1 or RAID 1+0, depending on various factors.
With ZFS I would more likely just create the mirrors on day one, thus :

# zpool create zfs0 mirror c1t9d0 c0t9d0 mirror c1t10d0 c0t10d0 ... etc

but I don't have that option now.  The zpool exists as a simple stripe set
at the moment, or whatever the ZFS analogue of a stripe set is.

Now zpool(1M) says the following for either "add" or "attach" :

     zpool add [-fn] pool vdev ...

         Adds the specified virtual devices to  the  given  pool.
         The vdev specification is described in the "Virtual Dev-
         ices" section. The behavior of the -f  option,  and  the
         device  checks  performed  are  described  in the "zpool
         create" subcommand.

         -f       Forces use of vdevs, even if they appear in use
                  or specify a conflicting replication level. Not
                  all devices can be overridden in this manner.

         -n       Displays the configuration that would  be  used
                  without  actually  adding the vdevs. The actual
                  pool creation can still fail  due  to  insuffi-
                  cient privileges or device sharing.


     zpool attach [-f] pool device new_device

         Attaches new_device to an  existing  zpool  device.  The
         existing device cannot be part of a raidz configuration.
         If device is not currently part of a mirrored configura-
         tion,  device  automatically  transforms  into a two-way
         mirror of device and new_device.  If device is part of a
         two-way mirror, attaching new_device creates a three-way
         mirror, and so on. In either case, new_device begins  to
         resilver immediately.

         -f       Forces use of new_device, even if it appears
                  to be in use. Not all devices can be overridden
                  in this manner.


Note that "attach" has no option for -n which would just show me the damage
I am about to do :-(
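
If I am reading those two correctly, "add" is not what I want here at all :
something like

# zpool add zfs0 c0t9d0

would just make the existing stripe one disk wider, still with no
redundancy, whereas "attach" is the one that pairs a new disk with an
existing disk to form a mirror.  That is only my reading of the man page,
so please correct me if I have it backwards.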

So I am making a best guess here that what I need is something like this :

# zpool attach zfs0 c1t9d0 c0t9d0

which would mean that the first disk in my zpool would be mirrored and
nothing else.  A weird config to be sure, but ... is this what will happen?
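
Assuming that single attach behaves the way I hope, my plan would be to
simply repeat it for each remaining pair, along the lines of

# zpool attach zfs0 c1t10d0 c0t10d0
# zpool attach zfs0 c1t11d0 c0t11d0
  ... and so on up through c1t14d0 ...

watching "zpool status zfs0" until each resilver completes and every
top-level device shows up as a two-way mirror.  The c0 target names above
are just my guess at how those disks will enumerate; I will use whatever
the system actually reports for them.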

I ask all this in painful, boring detail because I have no way to back up
this zpool other than tar to a DLT.  The last thing I want to do is destroy
my data while trying to add redundancy.

Any thoughts ?

-- 
Dennis Clarke
