[EMAIL PROTECTED] said:
> It's not that old.  It's a Supermicro system with a 3ware 9650SE-8LP
> and an Open-E iSCSI-R3 DOM module.  The system is plenty fast.  I can
> pretty handily pull 120MB/sec from it, and write at over 100MB/sec.
> It falls apart more on random I/O.  The server/initiator side is a
> T2000 with Solaris 10u4.  It never sees over 25% CPU, ever.  Oh yeah,
> and two 1Gb network links to the SAN
> . . .
> My opinion is that if everything slowed down evenly when the array got
> really loaded up, users wouldn't mind or notice much.  But when every
> 20 or so reads/writes gets delayed by tens of seconds, the users start
> to line up at my door.

Hmm, I have no experience with iSCSI yet.  But the behavior of our
T2000 file/NFS server, connected via a 2Gbit Fibre Channel SAN, is
exactly as you describe when our HDS SATA array falls behind.  Access
to other ZFS pools remains unaffected, but any access to the busy pool
just hangs.  Some Oracle apps on NFS clients die due to the excessive
delays.
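
If you want to put numbers on those multi-second stragglers before
blaming the array, something like the quick Python sketch below can
help.  It just times a run of small synchronous writes and prints the
latency tail.  The test path, op count, and block size are placeholders
I made up, not anything from your setup:

    #!/usr/bin/env python3
    # Rough probe for latency outliers: time small O_SYNC writes and
    # report the tail.  A sketch only -- PATH, OPS, and BLOCK are
    # illustrative placeholders, not values from either system.
    import os
    import time

    PATH = "/tank/latency_probe.dat"   # hypothetical file on the busy pool
    OPS = 500                          # arbitrary sample size
    BLOCK = 8 * 1024                   # 8 KiB, small "random-ish" writes

    lat = []
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        for i in range(OPS):
            buf = os.urandom(BLOCK)
            t0 = time.monotonic()
            # scatter block-aligned offsets within a 64 MiB region
            os.pwrite(fd, buf, (i * 7919 * BLOCK) % (64 * 1024 * 1024))
            lat.append(time.monotonic() - t0)
    finally:
        os.close(fd)
        os.unlink(PATH)

    lat.sort()
    print("median: %8.1f ms" % (lat[len(lat) // 2] * 1000))
    print("p95:    %8.1f ms" % (lat[int(OPS * 0.95)] * 1000))
    print("worst:  %8.1f ms" % (lat[-1] * 1000))

If the worst-case numbers dwarf the median the way you describe, the
problem is queueing, not raw throughput.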

In our case, this old HDS array's SATA shelves have a very limited
queue depth (four per RAID controller) on the "back end" loop, and
every write incurs the added overhead of an in-array read-back
verification.  Maybe your iSCSI setup injects enough latency under
heavy load to produce an effect much like our FC queue limitation.
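
A bit of back-of-the-envelope queueing arithmetic shows why a queue
that shallow saturates so fast.  The 4-deep queue is from our array;
the service times below are made-up illustrative numbers, not
measurements:

    # Why a shallow back-end queue caps throughput (Little's law).
    # queue_depth comes from the post; the millisecond figures are
    # assumed, illustrative values only.
    queue_depth = 4        # per RAID controller, per the post
    service_ms = 12.0      # assumed per-op SATA service time
    verify_ms = 8.0        # assumed cost of the in-array read-back verify

    per_op_s = (service_ms + verify_ms) / 1000.0
    # With at most queue_depth ops in flight, throughput tops out at:
    ceiling_iops = queue_depth / per_op_s
    print("ceiling: ~%.0f IOPS per controller" % ceiling_iops)

    offered_iops = 300     # assumed offered load, illustrative
    if offered_iops > ceiling_iops:
        excess = offered_iops - ceiling_iops
        print("backlog grows ~%.0f ops/sec; waits climb without bound"
              % excess)

Once the offered load crosses that ceiling, every extra op just waits
in line, which matches the every-Nth-op-takes-tens-of-seconds pattern
you're seeing.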

Good luck,

Marion


