Scott Lawson writes:
> Also you may wish to look at the output of 'iostat -xnce 1' as well.
>
> You can post those to the list if you have a specific problem.
>
> You want to be looking for error counts increasing and specifically 'asvc_t'
> for the service times on the disks. A higher number for asvc_t may help to
> isolate poorly performing disks.
Running "iostat -nxce 1", I saw write sizes alternate between two raidz groups
in the same pool.
At one time, drives on cotroller 1 have larger writes (3-10 times) than ones on
controller2:
                    extended device statistics       ---- errors ---
    r/s    w/s    [remaining columns and device rows truncated]
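Since 'iostat -xn' reports write throughput (kw/s) and write operations (w/s) as
separate columns, the average write size per device is roughly kw/s divided by w/s.
A minimal sketch of that calculation, assuming plain 'iostat -xn' output (adding -c
interleaves CPU lines and would need extra filtering):

# Print average write size in KB for every device line in the samples;
# $2 is w/s, $4 is kw/s, and the last field is the device name in -xn output.
iostat -xn 1 2 | awk '$1 ~ /^[0-9.]+$/ && $2 > 0 { printf "%-12s %6.1f KB/write\n", $NF, $4 / $2 }'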
Maybe you can run a DTrace probe using Chime?
http://blogs.sun.com/observatory/entry/chime
Initial Traces -> Device IO
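If you would rather not go through the Chime GUI, a raw DTrace one-liner against the
io provider can give a similar per-device view. This is only a sketch of the idea
(run as root), not what Chime itself executes:

# Sum bytes written to each device every second; writes are the I/Os
# that do not have B_READ set in b_flags.
dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/
    { @wbytes[args[1]->dev_statname] = sum(args[0]->b_bcount); }
  tick-1sec { printa(@wbytes); trunc(@wbytes); }'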
--
This message posted from opensolaris.org
Also you may wish to look at the output of 'iostat -xnce 1' as well.
You can post those to the list if you have a specific problem.
You want to be looking for error counts increasing and specifically 'asvc_t'
for the service times on the disks. A higher number for asvc_t may help to
isolate poorly performing disks.
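For example, a quick way to spot such disks while the problem is happening is to
filter the iostat output for devices whose asvc_t exceeds some threshold. The 20 ms
cut-off below is only an arbitrary illustration, and the column positions assume
plain 'iostat -xn' output:

# Print device name and asvc_t (column 8 in -xn output) for any disk
# whose average service time is above 20 ms in a 5-second sample.
iostat -xn 5 | awk '$1 ~ /^[0-9.]+$/ && $8 > 20 { print $NF, $8 }'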
You can try:
zpool iostat -v pool_name 1
This will show you I/O on each vdev at one-second intervals. Perhaps you will
see different I/O behavior on any suspect drive.
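If the imbalance only shows up intermittently, it can also help to capture the
per-vdev numbers with timestamps for later comparison. A minimal sketch, assuming a
pool named 'tank' (substitute your own pool name) and a bounded run of 300
one-second samples:

# Prefix each line of per-vdev output with the wall-clock time and log it.
zpool iostat -v tank 1 300 | while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
done > /tmp/zpool-iostat.log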
-Scott
--
This message posted from opensolaris.org