On 9/22/06, Gino Ruopolo <[EMAIL PROTECTED]> wrote:
> Update ...
>
> iostat output during "zpool scrub":
>
>                      extended device statistics
> device   r/s    w/s   Mr/s  Mw/s  wait  actv  svc_t  %w  %b
> sd34     2.0  395.2    0.1   0.6   0.0  34.8   87.7   0 100
> sd35    21.0  312.2    1.2   2.9   0.0  26.0   78.0   0  79
> sd36    20.0    1.0    1.2   0.0   0.0   0.7   31.4   0  13
> sd37    20.0    1.0    1.0   0.0   0.0   0.7   35.1   0  21
>
> sd34 is always at 100% ...
What is strange is that this is almost all writes. Do you have
the rsync running at this time? A scrub alone should not look
like this.
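
If you want to separate the two workloads, something like the
following should tell them apart (the pool name "tank" is just a
placeholder here, not taken from your report):

  # confirm the scrub really is in progress, and how far along it is
  zpool status tank

  # per-vdev / per-disk view of the pool's own I/O
  zpool iostat -v tank 5

  # then pause the rsync for a minute and compare another interval of
  iostat -xM 5

A scrub by itself should be read-dominated, so if the writes vanish
while rsync is paused, that answers it.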
I have also observed some strange behavior on a 4-disk raidz,
which may be related. It is possible to saturate a single disk
while all the others in the same vdev are completely idle. It
is very easy to reproduce, so try the following:
Create a filesystem with a 4k recordsize on a 4-disk raidz.
Now copy a large file to it while observing 'iostat -xnz 5'.
This is the worst case I have been able to produce, but the
imbalance is apparent even with an untar at the default
recordsize.
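
For anyone who wants to try it, a rough sketch of that reproduction
(the pool and device names below are placeholders, not my actual
setup):

  # build a 4-disk raidz pool and a filesystem with a 4k recordsize
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs create tank/test
  zfs set recordsize=4k tank/test

  # copy something large in, and watch per-disk activity
  # from another terminal
  cp /some/large/file /tank/test/ &
  iostat -xnz 5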
Interestingly, it is always the last disk in the set that is busy.
This behavior does not occur with a 3-disk raidz, nor is it as
bad with other recordsizes.
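
If you don't have spare disks handy, the 3-disk vs. 4-disk comparison
can be approximated with file-backed vdevs; this is only a sketch
under that assumption, and per-"disk" activity then shows up in
'zpool iostat -v' rather than in iostat:

  # four 512 MB backing files
  mkfile 512m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4

  # 3-wide raidz, run the same copy, watch the per-vdev columns
  zpool create test3 raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  zpool iostat -v test3 5

  # rebuild 4-wide and repeat for comparison
  zpool destroy test3
  zpool create test4 raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  zpool iostat -v test4 5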
Chris