On Apr 25, 2011, at 2:52 PM, Brandon High wrote:

> I'm in the process of replacing drives in a pool, and the resilver
> times seem to have increased with each device. The way that I'm doing
> this is by pulling a drive, physically replacing it, then doing
> 'cfgadm -c configure ____ ; zpool replace tank ____'. I don't have any
> hot-swap bays available, so I'm physically replacing the device before
> doing a 'zpool replace'.
> 
> I'm replacing Western Digital WD10EADS 1TB drives with Hitachi 5K3000
> 3TB drives. Neither device is fast, but they aren't THAT slow. wsvc_t
> and asvc_t both look fairly healthy given the device types.

Look for 10-12 ms asvc_t on drives like these.  In my experience, SATA disks tend
not to handle NCQ as well as SCSI disks handle TCQ -- go figure. In your iostat
output below, you are clearly not bottlenecked on the disks.
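
If you want to keep an eye on just the resilver target while it runs, iostat
takes a device argument; a quick sketch (c2t5d0 is the device name from your
status output below), watching the asvc_t and %b columns:
        iostat -xn c2t5d0 10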

> 
> Replacing the first device (took about 20 hours) went about as
> expected. The second took about 44 hours. The third is still running
> and should finish in slightly over 48 hours.

If there is other work going on, you might be hitting the resilver throttle.
By default it delays each resilver I/O by 2 clock ticks when there is other
I/O competing for the pool. It can be turned off temporarily using:
        echo zfs_resilver_delay/W0t0 | mdb -kw

and restored to the default with:
        echo zfs_resilver_delay/W0t2 | mdb -kw
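
If you do turn it off, something like this (a rough sketch, assuming your pool
name is tank and using only the commands above) restores the default once the
resilver completes:
        echo zfs_resilver_delay/W0t0 | mdb -kw    # disable the throttle
        while zpool status tank | grep 'resilver in progress' > /dev/null; do
                sleep 600                         # re-check every 10 minutes
        done
        echo zfs_resilver_delay/W0t2 | mdb -kw    # restore the default 2-tick delay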

> I'm wondering if the following would help for the next drive:
> # zpool offline tank c2t4d0
> # cfgadm -c unconfigure sata3/4::dsk/c2t4d0
> 
> At this point pull the drive and put it into an external USB adapter.
> Put the new drive in the hot-swap bay. The USB adapter shows up as
> c4t0d0.
> 
> # zpool online tank c4t0d0
> 
> This should re-add it to the pool and resilver the last few
> transactions that may have been missed, right?
> 
> Then I want to actually replace the drive in the zpool:
> # cfgadm -c configure sata3/4
> # zpool replace tank c4t0d0 c2t4d0
> 
> Will this work? Will the replace go faster, since it won't need to
> resilver from the parity data?

Probably won't help, because none of that makes the drive being resilvered
onto any faster -- and that is where the time is going.
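
If you want to see whether any change actually speeds things up, the scan line
from zpool status gives the effective rate over time; a rough sketch (pool name
taken from your output):
        while :; do zpool status tank | grep scanned; sleep 600; done
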
 -- richard

> 
> 
> $ zpool list tank
> NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> tank  7.25T  6.40T   867G    88%  1.11x  DEGRADED  -
> $ zpool status -x
>  pool: tank
> state: DEGRADED
> status: One or more devices is currently being resilvered.  The pool will
>        continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
> scan: resilver in progress since Sat Apr 23 17:03:13 2011
>    5.91T scanned out of 6.40T at 38.0M/s, 3h42m to go
>    752G resilvered, 92.43% done
> config:
> 
>        NAME              STATE     READ WRITE CKSUM
>        tank              DEGRADED     0     0     0
>          raidz2-0        DEGRADED     0     0     0
>            c2t0d0        ONLINE       0     0     0
>            c2t1d0        ONLINE       0     0     0
>            c2t2d0        ONLINE       0     0     0
>            c2t3d0        ONLINE       0     0     0
>            c2t4d0        ONLINE       0     0     0
>            replacing-5   DEGRADED     0     0     0
>              c2t5d0/old  FAULTED      0     0     0  corrupted data
>              c2t5d0      ONLINE       0     0     0  (resilvering)
>            c2t6d0        ONLINE       0     0     0
>            c2t7d0        ONLINE       0     0     0
> 
> errors: No known data errors
> $ zpool iostat -v tank 60 3
>                     capacity     operations    bandwidth
> pool              alloc   free   read  write   read  write
> ----------------  -----  -----  -----  -----  -----  -----
> tank              6.40T   867G    566     25  32.2M   156K
>  raidz2          6.40T   867G    566     25  32.2M   156K
>    c2t0d0            -      -    362     11  5.56M  71.6K
>    c2t1d0            -      -    365     11  5.56M  71.6K
>    c2t2d0            -      -    363     11  5.56M  71.6K
>    c2t3d0            -      -    363     11  5.56M  71.6K
>    c2t4d0            -      -    361     11  5.54M  71.6K
>    replacing         -      -      0    492  8.28K  4.79M
>      c2t5d0/old      -      -    202      5  2.84M  36.7K
>      c2t5d0          -      -      0    315  8.66K  4.78M
>    c2t6d0            -      -    170    190  2.68M  2.69M
>    c2t7d0            -      -    386     10  5.53M  71.6K
> ----------------  -----  -----  -----  -----  -----  -----
> 
>                     capacity     operations    bandwidth
> pool              alloc   free   read  write   read  write
> ----------------  -----  -----  -----  -----  -----  -----
> tank              6.40T   867G    612     14  8.43M  70.7K
>  raidz2          6.40T   867G    612     14  8.43M  70.7K
>    c2t0d0            -      -    411     11  1.51M  57.9K
>    c2t1d0            -      -    414     11  1.50M  58.0K
>    c2t2d0            -      -    385     11  1.51M  57.9K
>    c2t3d0            -      -    412     11  1.50M  58.0K
>    c2t4d0            -      -    412     11  1.45M  57.8K
>    replacing         -      -      0    574    366   852K
>      c2t5d0/old      -      -      0      0      0      0
>      c2t5d0          -      -      0    324    366   852K
>    c2t6d0            -      -    427     11  1.45M  57.8K
>    c2t7d0            -      -    431     11  1.49M  57.9K
> ----------------  -----  -----  -----  -----  -----  -----
> 
>                     capacity     operations    bandwidth
> pool              alloc   free   read  write   read  write
> ----------------  -----  -----  -----  -----  -----  -----
> tank              6.40T   867G  1.02K     12  11.1M  69.4K
>  raidz2          6.40T   867G  1.02K     12  11.1M  69.4K
>    c2t0d0            -      -    772     10  1.99M  59.3K
>    c2t1d0            -      -    771     10  1.99M  59.4K
>    c2t2d0            -      -    743     10  2.02M  59.4K
>    c2t3d0            -      -    771     11  2.01M  59.3K
>    c2t4d0            -      -    767     10  1.94M  59.1K
>    replacing         -      -      0  1.00K     17  1.48M
>      c2t5d0/old      -      -      0      0      0      0
>      c2t5d0          -      -      0    533     17  1.48M
>    c2t6d0            -      -    791     10  1.98M  59.2K
>    c2t7d0            -      -    796     10  1.99M  59.3K
> ----------------  -----  -----  -----  -----  -----  -----
> 
> $ iostat -xn 60 3
>                    extended device statistics
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  362.4   11.5 5693.9   71.6  0.7  0.7    2.0    2.0  14  30 c2t0d0
>  365.3   11.5 5689.0   71.6  0.7  0.7    1.8    1.9  14  29 c2t1d0
>  363.2   11.5 5693.2   71.6  0.7  0.7    1.9    2.0  14  30 c2t2d0
>  364.0   11.5 5692.7   71.6  0.7  0.7    1.9    1.9  14  30 c2t3d0
>  361.2   11.5 5672.8   71.6  0.7  0.7    1.9    1.9  14  30 c2t4d0
>  202.4  163.1 2915.2 2475.3  0.3  1.1    0.8    2.9   7  26 c2t5d0
>  170.4  190.4 2747.3 2757.6  0.5  1.3    1.5    3.6  11  31 c2t6d0
>  386.4   11.2 5659.0   71.6  0.5  0.6    1.3    1.5  12  27 c2t7d0
>   95.0    1.2   94.5   16.1  0.0  0.0    0.2    0.2   0   1 c0t0d0
>    0.9    1.2    3.3   16.1  0.0  0.0    7.5    1.9   0   0 c0t1d0
>                    extended device statistics
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  514.1   13.0 1937.7   65.7  0.2  0.8    0.3    1.5   5  27 c2t0d0
>  510.1   13.2 1943.1   65.7  0.2  0.8    0.5    1.6   6  29 c2t1d0
>  513.3   13.2 1926.3   65.8  0.2  0.8    0.3    1.5   5  28 c2t2d0
>  505.9   13.3 1936.7   65.8  0.2  0.9    0.3    1.8   5  30 c2t3d0
>  513.8   12.8 1890.1   65.8  0.2  0.8    0.3    1.5   5  26 c2t4d0
>    0.1  488.6    0.1 1216.5  0.0  2.2    0.0    4.6   0  33 c2t5d0
>  533.3   12.7 1875.3   65.9  0.1  0.7    0.2    1.3   4  24 c2t6d0
>  541.6   12.9 1923.2   65.8  0.1  0.7    0.2    1.2   3  23 c2t7d0
>    0.0    2.0    0.0    9.4  0.0  0.0    1.0    0.2   0   0 c0t0d0
>    0.0    2.0    0.0    9.4  0.0  0.0    1.0    0.2   0   0 c0t1d0
>                    extended device statistics
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  506.7    9.2 1906.9   50.2  0.6  0.2    1.2    0.5  20  23 c2t0d0
>  509.8    9.3 1909.5   50.2  0.6  0.2    1.2    0.4  19  23 c2t1d0
>  508.6    9.0 1900.4   50.2  0.7  0.3    1.4    0.5  21  25 c2t2d0
>  506.8    9.4 1897.2   50.3  0.6  0.2    1.2    0.5  19  23 c2t3d0
>  505.1    9.4 1852.4   50.4  0.6  0.2    1.2    0.5  19  23 c2t4d0
>    0.0  487.6    0.0 1227.9  0.0  3.5    0.0    7.2   0  46 c2t5d0
>  534.8    9.2 1855.6   50.2  0.6  0.2    1.0    0.4  18  22 c2t6d0
>  540.5    9.3 1891.4   50.2  0.5  0.2    1.0    0.4  17  21 c2t7d0
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t1d0
> 
> 
> 
> -- 
> Brandon High : bh...@freaks.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
