The drives are all connected to the SATA ports on the motherboard (an Intel S3210SHLX).

I've scrubbed the pool several times in the last two days with no errors:

[EMAIL PROTECTED]:~# zpool status -v
  pool: main_pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        main_pool   ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0

errors: No known data errors
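
For reference, the scrub runs were just the standard commands (a minimal
sketch, using the pool name from above):

    zpool scrub main_pool          # start a scrub of the pool
    zpool status -v main_pool      # check progress/results when it finishes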

I appreciate your feedback; I had not thought to aggregate the per-drive
stats over time and compare the totals.
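
For next time, here is a rough sketch of summing the per-disk write
bandwidth across a few samples, along the lines of the totals you
computed (it assumes the stock "zpool iostat -v" column layout and the
c5t* device names in this pool, so the field positions may need adjusting):

    zpool iostat -v main_pool 60 3 | nawk '
      /^ *c5t/ {
        bw = $7; mult = 1/1048576    # col 7 = write bandwidth; bare numbers are bytes
        if (bw ~ /K$/) mult = 1/1024 # normalize K/M/G suffixes to MB
        if (bw ~ /M$/) mult = 1
        if (bw ~ /G$/) mult = 1024
        sub(/[KMG]$/, "", bw)
        sum[$1] += bw * mult         # running per-disk total across samples
      }
      END { for (d in sum) printf "%-8s %8.2f MB/s summed\n", d, sum[d] }'

(The first report from zpool iostat is the average since boot, so it is the
later samples that matter.)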

Thanks,
Charles

On Fri, Nov 21, 2008 at 3:24 PM, Will Murnane <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 14:35, Charles Menser <[EMAIL PROTECTED]> wrote:
>> I have a 5-drive raidz2 pool with an iSCSI share on it. While backing
>> up a MacOS drive to it I noticed some very strange access patterns,
>> and wanted to know whether what I am seeing is normal.
>>
>> There are times when all five drives are accessed equally, and there
>> are times when only three of them are seeing any load.
> What does "zpool status" say?  How are the drives connected?  To what
> controller(s)?
>
> This could just be some degree of asynchronicity showing up.  Take a
> look at these two:
>              capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> main_pool    852G  3.70T    361  1.30K  2.78M  10.1M
>  raidz2     852G  3.70T    361  1.30K  2.78M  10.1M
>   c5t5d0      -      -    180    502  1.25M  3.57M
>   c5t3d0      -      -    205    330  1.30M  2.73M
>   c5t4d0      -      -    239    489  1.43M  2.81M
>   c5t2d0      -      -    205     17  1.25M  26.1K
>   c5t1d0      -      -    248     13  1.41M  25.1K
> ----------  -----  -----  -----  -----  -----  -----
>
>              capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> main_pool    852G  3.70T     10  2.02K  77.7K  15.8M
>  raidz2     852G  3.70T     10  2.02K  77.7K  15.8M
>   c5t5d0      -      -      2    921   109K  6.52M
>   c5t3d0      -      -      9    691   108K  5.63M
>   c5t4d0      -      -      9    962   105K  5.97M
>   c5t2d0      -      -      9  1.30K   167K  8.50M
>   c5t1d0      -      -      2  1.23K   150K  8.54M
> ----------  -----  -----  -----  -----  -----  -----
>
> For c5t5d0, a total of 3.57+6.52 MB of I/O happens: 10.09 MB;
> For c5t3d0, a total of 2.73+5.63 MB of I/O happens: 8.36 MB;
> For c5t4d0, a total of 2.81+5.97 MB of I/O happens: 8.78 MB;
> For c5t2d0, a total of (~0)+8.50 MB of I/O happens: 8.50 MB;
> and for c5t1d0, a total of (~0)+8.54 MB of I/O happens: 8.54 MB.
>
> So over time, the amount written to each drive is approximately the
> same.  This being the case, I don't think I'd worry about it too
> much... but a scrub is a fairly cheap way to get peace of mind.
>
> Will
>