>
> It could be a disk failing and dragging I/O down with it.
>
> Try checking for high asvc_t with `iostat -xCn 1`, and look for errors in `iostat -En`.
>
> Any timeouts or retries in /var/adm/messages?
>
> --
> Giovanni Tirloni
> gtirl...@sysdroid.com
>

I checked for high service times during a scrub, and all disks are
pretty equal. During a scrub, each disk peaks at about 350 reads/sec,
with an asvc_t of up to 30 during those read spikes (I assume that
means 30 ms, which isn't terrible for a highly loaded SATA disk).
No errors reported by smartctl, iostat, or /var/adm/messages.
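
For anyone wanting to repeat this check, here's a rough sketch of how I
eyeball the service-time columns. The sample data and the 20 ms threshold
are just illustrative (not from my actual pool); the awk field positions
match the `iostat -xn` column layout (asvc_t is field 8, device is field 11):

```shell
# Hypothetical filter: flag devices whose asvc_t exceeds a threshold
# in `iostat -xn`-style output. The sample below stands in for a live
# `iostat -xn 1` run; on a real system you'd pipe iostat straight in.
iostat_sample='    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  350.0    0.0 2800.0    0.0  0.0  2.1    0.0   30.2   0  95 c0t0d0
  348.0    0.0 2790.0    0.0  0.0  0.9    0.0    8.4   0  60 c0t1d0'

# Skip the header (NR > 1) and print device + asvc_t when it tops 20 ms.
echo "$iostat_sample" | awk 'NR > 1 && $8 > 20 { print $11, $8 }'
```

One outlier disk showing up here consistently while its siblings stay low
is the classic sign of a single failing disk dragging the vdev down.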

I opened a case on SunSolve, but I fear that since I am running a dev
build I will be out of luck. I cannot run 2009.06 due to CIFS
segfaults and problems with zfs send/recv hanging pools (well-documented
issues).
I'd run Solaris proper, but not having in-kernel CIFS or COMSTAR would
be a major setback for me.



-- 
Brent Jones
br...@servuhome.net
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
