SUMMARY :

    I attach a new disk device to an existing mirror set.
    zpool iostat poolname 5 does not report the resilver write bandwidth.
    zpool iostat -v poolname 5 reports read and write data correctly.

    Also seen : sometimes a bandwidth figure is non-zero but
    has no units [ B, KB, MB, etc. ].

I have a mirrored pool thus :

# zpool status
  pool: phobos_rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        phobos_rpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

I attach another device :

# zpool attach phobos_rpool c1t0d0s0 c0t2d0
# zpool status
  pool: phobos_rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.42% done, 0h35m to go
config:

        NAME          STATE     READ WRITE CKSUM
        phobos_rpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0

A few minutes later the output from zpool status still seems correct :

# zpool status
  pool: phobos_rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h7m, 37.16% done, 0h11m to go
config:

        NAME          STATE     READ WRITE CKSUM
        phobos_rpool  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0

Also, the verbose iostat data seems correct; the resilver writes to c0t2d0 show
up clearly :

# zpool iostat -v phobos_rpool 5
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    225     35  6.56M  51.6K
  mirror      16.2G  17.5G    225     35  6.56M  51.6K
    c1t0d0s0      -      -     68     19  4.70M  52.8K
    c1t1d0s0      -      -     36     21  2.34M  52.8K
    c0t2d0        -      -      0    294    216  14.0M
------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    478     58  13.6M  40.4K
  mirror      16.2G  17.5G    478     58  13.6M  40.4K
    c1t0d0s0      -      -    144     36  8.71M  40.4K
    c1t1d0s0      -      -    107     40  5.35M  40.3K
    c0t2d0        -      -      0    317      0  13.6M
------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    341     69  18.9M  44.4K
  mirror      16.2G  17.5G    341     69  18.9M  44.4K
    c1t0d0s0      -      -    151     24  13.2M  44.4K
    c1t1d0s0      -      -     72     29  5.99M  44.4K
    c0t2d0        -      -      0    237      0  18.9M
------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    595     85  13.1M  48.7K
  mirror      16.2G  17.5G    595     85  13.1M  48.7K
    c1t0d0s0      -      -    215     36  11.6M  50.3K
    c1t1d0s0      -      -     98     52  6.26M  50.3K
    c0t2d0        -      -      0    383      0  13.1M
------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    669    126  15.8M  76.5K
  mirror      16.2G  17.5G    669    126  15.8M  76.5K
    c1t0d0s0      -      -    194     53  10.5M  75.5K
    c1t1d0s0      -      -    102     68  5.34M  76.4K
    c0t2d0        -      -      0    368      0  15.8M
------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    260     41  16.5M  28.0K
  mirror      16.2G  17.5G    260     41  16.5M  28.0K
    c1t0d0s0      -      -    161     17  12.1M  29.8K
    c1t1d0s0      -      -     56     29  5.16M  28.7K
    c0t2d0        -      -      0    222      0  16.4M
------------  -----  -----  -----  -----  -----  -----

^C#

Non-verbose iostat data shows almost no write bandwidth, even though the
verbose output above shows 13 to 19 MB/sec of resilver writes going to c0t2d0 :

# zpool iostat phobos_rpool 5
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    233     36  6.98M  51.0K
phobos_rpool  16.2G  17.5G    202     10  18.6M  12.2K
phobos_rpool  16.2G  17.5G    212     15  15.5M  14.6K
phobos_rpool  16.2G  17.5G    274     43  15.5M  36.9K
phobos_rpool  16.2G  17.5G    250     24  21.1M  22.7K
phobos_rpool  16.2G  17.5G    189     15  16.8M  14.9K
phobos_rpool  16.2G  17.5G    205     21  16.8M  18.5K
^C#
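
For what it is worth, capturing both forms over the same window makes the
mismatch easy to show sample by sample. A rough way to do that ( the file
names are only examples ) :

# zpool iostat phobos_rpool 5 > /tmp/iostat.plain &
# zpool iostat -v phobos_rpool 5 > /tmp/iostat.verbose &
# sleep 60
# pkill -f 'zpool iostat'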

I also note that the verbose reports often show a figure with no units in the
read bandwidth column for the new device :

# zpool iostat -v phobos_rpool 5
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
phobos_rpool  16.2G  17.5G    375     52  8.60M  74.7K
  mirror      16.2G  17.5G    375     52  8.60M  74.7K
    c1t0d0s0      -      -    112     29  6.21M  75.5K
    c1t1d0s0      -      -     59     32  3.10M  75.5K
    c0t2d0        -      -      0    343    104  13.3M
------------  -----  -----  -----  -----  -----  -----

See the 104 in the c0t2d0 row. Is that bytes, KB, or MB? It may be documented
somewhere, but I suspect it is not just bytes.
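
Then again, if whatever formats these figures only appends a suffix once a
value reaches 1024, a bare 104 or 216 would simply be bytes per second. I
have not read the libzfs source, so this is only a sketch of that guessed
rule and not the real code :

# a guess at the suffix rule, integer math only, purely for illustration
humanize() {
        n=$1
        for u in "" K M G T ; do
                if [ "$n" -lt 1024 ] ; then
                        echo "${n}${u}"
                        return
                fi
                n=$(( n / 1024 ))
        done
        echo "${n}P"
}
humanize 104            # prints 104  ( no suffix, so plain bytes )
humanize 14680064       # prints 14M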

Sorry if I am being nit-picky, but I thought this data would be in the kstat
chain and that the per-device figures would be summed for the non-verbose
report. As it stands, the write traffic to the newly attached device looks
like it is simply being ignored in the non-verbose output.
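
As a quick check of that assumption, summing the leaf vdev write columns from
the saved verbose output gives the sort of figure I expected the pool line to
reflect. A throwaway nawk over the /tmp/iostat.verbose file from above ( it
only understands the K and M suffixes seen in this output, so treat it as a
sketch ) :

# nawk '
        function tobytes(s) {
                if (s ~ /M$/) return s * 1048576
                if (s ~ /K$/) return s * 1024
                return s + 0
        }
        # leaf vdev rows : field 7 is the write bandwidth column
        $1 ~ /^c[0-9]+t/        { wsum += tobytes($7) }
        # the dashed line that closes each sample
        /^-----/ && wsum        { printf "leaf write bw : %.2fM\n", wsum / 1048576 ; wsum = 0 }
' /tmp/iostat.verbose

For the first verbose sample above that comes to roughly 14M, nearly all of
it the resilver writes to c0t2d0, against the 51.6K reported on the
phobos_rpool line.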


-- 
Dennis Clarke
