Bill Sommerfeld, sorry.

        However, I am trying to explain what I think is
        happening on your system and why I consider this
        normal.

        Most of the reads done for an FS "replace" normally
        happen at the block level.

        To copy an FS, some level of reading MUST be done
        from the orig_dev.
        At what level that happens, and whether it is
        recorded as a normal vnode read / mmap op for the
        direct and indirect blocks, is another story.

        But the reading is being done. It is just not being
        recorded in FS stats, because read stats normally
        count only ordinary FS object access requests.
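
        One way to see this (a rough check, assuming your
        build has fsstat(1M)) is to compare the vnode-level
        view against the device-level view while the
        resilver runs:

                # vnode-layer ops, per filesystem type
                fsstat zfs 5

                # physical reads/writes, per device
                iostat -xn 5

        If the reads show up in iostat but not in fsstat,
        they are happening below the vnode layer.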

        Secondly, starting maybe with the "uberblock", the
        rest of the metadata is probably being read. And
        because of the normal async access of FSs, it would
        not surprise me if each znode's access time field
        is then updated. Remember that unless you are just
        touching an FS low-level (file) object, all writes
        are preceded by at least 1 read.
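
        That atime guess is easy to test (assuming the pool
        is the 'z' from your iostat, and that nothing you
        run depends on atime): turn off access-time updates
        and watch whether the write traffic changes:

                # stop access-time updates for the pool's FSs
                zfs set atime=off z

                # see whether the per-device write rate drops
                zpool iostat -v z 5

        If the writes are unchanged, they are coming from
        the resilver itself and not from znode updates.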

        Mitchell Erblich
        ----------------

Bill Sommerfeld wrote:
> 
> On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:
> > Bill Sommerfield,
> 
> Again, that's not how my name is spelled.
> 
> >       With some normal sporadic read failure, accessing
> >       the whole pool may force repeated reads for
> >       the replace.
> 
> please look again at the iostat I posted:
> 
>                   capacity     operations    bandwidth
> pool            used  avail   read  write   read  write
> -------------  -----  -----  -----  -----  -----  -----
> z               306G   714G  1.43K    658  23.5M  1.11M
>   raidz1        109G   231G  1.08K    392  22.3M   497K
>     replacing      -      -      0   1012      0  5.72M
>       c1t4d0       -      -      0    753      0  5.73M
>       c1t5d0       -      -      0    790      0  5.72M
>     c2t12d0        -      -    339    177  9.46M   149K
>     c2t13d0        -      -    317    177  9.08M   149K
>     c3t12d0        -      -    330    181  9.27M   147K
>     c3t13d0        -      -    352    180  9.45M   146K
>   raidz1        100G   240G    117    101   373K   225K
>     c1t3d0         -      -     65     33  3.99M  64.1K
>     c2t10d0        -      -     60     44  3.77M  63.2K
>     c2t11d0        -      -     62     42  3.87M  63.4K
>     c3t10d0        -      -     63     42  3.88M  62.3K
>     c3t11d0        -      -     65     35  4.06M  61.8K
>   raidz1       96.2G   244G    234    164   768K   415K
>     c1t2d0         -      -    129     49  7.85M   112K
>     c2t8d0         -      -    133     54  8.05M   112K
>     c2t9d0         -      -    132     56  8.08M   113K
>     c3t8d0         -      -    132     52  8.01M   113K
>     c3t9d0         -      -    132     49  8.16M   112K
> 
> there were no (zero, none, nada, zilch) reads directed to the failing
> device.  there were a lot of WRITES to the failing device; in fact, the
> same volume of data was being written to BOTH the failing device and
> the new device.
> 
> >       So, I was thinking that a read access
> >       could ALSO be updating the znode. This newer
> >       time/date stamp is causing a lot of writes.
> 
> that's not going to be significant as a source of traffic; again, look
> at the above iostat, which was representative of the load throughout the
> resilver.