On 9/22/06, Gino Ruopolo <[EMAIL PROTECTED]> wrote:
> Update ...
>
> iostat output during "zpool scrub"
>
>                     extended device statistics
> device     r/s     w/s   Mr/s   Mw/s  wait  actv  svc_t  %w  %b
> sd34       2.0   395.2    0.1    0.6   0.0  34.8   87.7   0 100
> sd35      21.0   312.2    1.2    2.9   0.0  26.0   78.0   0  79
> sd36      20.0     1.0    1.2    0.0   0.0   0.7   31.4   0  13
> sd37      20.0     1.0    1.0    0.0   0.0   0.7   35.1   0  21
> sd34 is always at 100% ...
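
For reference, per-device output in that layout comes from Solaris
iostat's extended statistics in megabyte units; the exact invocation
and device list below are my assumption, not from the thread:

    # extended device stats (-x) in MB/s (-M), 5-second samples,
    # restricted to the four pool disks
    iostat -xM sd34 sd35 sd36 sd37 5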


  pool: zpool1
 state: ONLINE
 scrub: scrub in progress, 0.13% done, 72h39m to go
config:

        NAME                                       STATE     READ WRITE CKSUM
        zpool1                                     ONLINE       0     0     0
          raidz                                    ONLINE       0     0     0
            c4t60001FE100118DB000091190724700C7d0  ONLINE       0     0     0
            c4t60001FE100118DB000091190724700C9d0  ONLINE       0     0     0
            c4t60001FE100118DB000091190724700CBd0  ONLINE       0     0     0
            c4t60001FE100118DB000091190724700CCd0  ONLINE       0     0     0

72 hours?? Isn't that too much for 370GB of data?
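
As a rough sanity check (my arithmetic, not from the original post):
370 GB spread over the quoted 72h39m works out to only about 1.4 MB/s,
far below what even a single healthy disk sustains sequentially, so
the estimate does look suspicious:

    # implied scrub rate for 370 GB in 72h39m, in MB/s
    echo "scale=2; (370 * 1024) / ((72 * 60 + 39) * 60)" | bc
    # -> 1.44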




For what it's worth, I've found that within the first ~5 minutes of
starting a scrub, the time estimate is usually wildly out of proportion
to the time the scrub actually takes, so I wouldn't read much into an
early 72-hour figure.
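
A cheap way to watch the estimate settle is to poll the scrub line for
a while; a minimal sketch, assuming the pool name from above:

    # print the scrub progress line every 10 minutes
    while true; do zpool status zpool1 | grep scrub; sleep 600; done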

- Rich