On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote:
> Ok, now I know it's not related to the I/O performance, but to ZFS
> itself. At some point all 3 pools were locked up in the same way:
>
>                     extended device statistics       ---- errors ---
>     r/s  w/s  kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  s/w  h/w  trn  tot  device
>     0.0  0.0   0.0   0.0   0.0   0.0     0.0     0.0   0    0    0    1    0    1  c8t0d0
>     0.0  0.0   0.0   0.0   0.0   8.0     0.0     0.0   0  100    0    0    0    0  c7t0d0
Nope, most likely it's your disks or your disk controller/driver. Note
that you have 8 outstanding I/O requests on c7t0d0 that aren't being
serviced. Look in your syslog, and I bet you'll see I/O timeout errors.
I have seen this before with Western Digital disks attached to an LSI
controller using the mpt driver. There was a lot of work diagnosing it;
see the list archives. An /etc/system change fixed it for me (set
xpv_psm:xen_support_msi = -1), but I was using a xen kernel. Note that
replacing my disks with larger Seagate ones made the problem go away as
well.
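The output you posted looks like iostat with extended statistics and
error counters; assuming that's what you ran, you can watch for the
hang as it happens with something like:

    # extended device statistics plus error counters, every 5 seconds
    iostat -xne 5

An actv column stuck at a non-zero value while no reads or writes
complete is the signature of requests queued to the device and never
coming back.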
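To check for the timeouts, a quick grep of the default Solaris log
should turn them up (adjust the path if your syslog.conf routes kernel
messages elsewhere):

    # look for I/O timeouts and mpt driver complaints in the system log
    grep -i timeout /var/adm/messages
    grep -i mpt /var/adm/messages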
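For reference, the workaround in my case was the following /etc/system
entry, which as I understand it disables MSI in the xen platform
support module. It only makes sense under a xen kernel, and /etc/system
is read at boot, so it needs a reboot to take effect:

    # /etc/system: work around mpt I/O timeouts under xen by disabling MSI
    set xpv_psm:xen_support_msi = -1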
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss