Please also check http://www.microsoft.com/downloads/details.aspx?familyid=12CB3C1A-15D6-4585-B385-BEFD1319F825&displaylang=en
Best regards,

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email [EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of John Tracy
Sent: Tuesday, 19 February 2008 22:02
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

Hello All,

I've been creating iSCSI targets on the following two boxes:

- Sun Ultra 40 M2 with eight 10K RPM SATA disks
- Sun x2200 M2 with two 15K RPM SAS drives

Both are running build 82. I'm creating a ZFS volume and sharing it with "zfs set shareiscsi=on poolname/volume". I can access the iSCSI volume without any problems, but I/O is terribly slow: about five megabytes per second sustained. I've also tried creating an iSCSI target backed by a UFS filesystem and get the same slow I/O, and I've tried every RAID level available in ZFS with the same results.

The client machines are Windows 2003 Enterprise Edition SP2 and Windows XP SP2, both running Microsoft iSCSI initiator 2.06. I've tried moving some of the client machines onto the same physical switch as the target servers, tried another switch entirely, and even physically isolated the computers from the rest of my network; the results are the same every time. I'm not sure where to go from here or what to try next.

The network is all gigabit. I normally have the Solaris boxes in an 802.3ad LAG, tying two physical NICs together, which should give me a maximum of 2 Gb/s of bandwidth (250 megabytes per second). Of course, I've also tried without any LAG, with the same results. In short, I've tried every combination of everything I know to try, except a different iSCSI client/server software stack (well, I did try version 2.05 of Microsoft's iSCSI initiator client; same result).
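For reference, the setup described above can be sketched as follows. This is a minimal sketch for the OpenSolaris-era iSCSI target; the pool name `tank`, volume name `iscsivol`, and the 100 GB size are hypothetical placeholders, not values from the original message.

```shell
# Create a 100 GB ZFS volume (zvol) inside an existing pool.
# "tank" and "iscsivol" are example names; substitute your own.
zfs create -V 100G tank/iscsivol

# Export the zvol as an iSCSI target, as described in the message.
zfs set shareiscsi=on tank/iscsivol

# Verify the target was created and see its IQN and connection state.
iscsitadm list target -v
```

These are Solaris-specific administrative commands and will only run on a system with the old `shareiscsi`-based iSCSI target stack installed.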
Here is what I'm seeing with performance logs on the Windows side: on any of the boxes, the queue length for the "hard disk" (the iSCSI target) climbs from under 1 to 600+ and then drops back to under 1 about every four or five seconds.

On the Solaris side, I'm running "iostat -xtc 1", which shows lots of I/O activity on the drives in my ZFS pool, then a pause of three or four seconds, then another second or two of heavy activity, then a lull again; the cycle repeats for as long as I sustain I/O against the iSCSI target. The output of prstat doesn't show any heavy processor or memory usage on the Solaris box.

I'm not sure what other monitors to run on either side to get a better picture. Any recommendations on how to proceed? Does anybody else use the Solaris iSCSI target software to export targets to initiators running the MS iSCSI initiator?

Thank you,
John

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
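To watch the burst/lull pattern described above from a few more angles, the following monitoring commands may help. The pool name `tank` is a hypothetical placeholder; only `iostat -xtc 1` and `prstat` are taken from the original message, the rest are suggested additions.

```shell
# Per-device extended statistics with timestamps, 1-second intervals
# (this is the command already being run in the message):
iostat -xtc 1

# Pool-level view of the same traffic, broken down per vdev;
# "tank" is an example pool name:
zpool iostat -v tank 1

# Per-process CPU and memory usage, refreshed every second,
# to confirm nothing on the box is pegged:
prstat 1
```

Correlating the `zpool iostat` output with the Windows-side queue-length spikes should make it clearer whether the stalls happen at the pool, the target daemon, or the network.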