Just wanted to add that I'm in the exact same boat - I'm connecting from a 
Windows system and getting just horrid iSCSI transfer speeds.

I've tried switching to COMSTAR (although I'm not certain it's actually in use) with no 
improvement, and I also tried updating to the latest dev build of OpenSolaris.  All that 
got me from updating to the latest dev build was a completely broken system that I 
couldn't even get a command line on.  Fortunately I was able to roll back to the previous 
version and keep tinkering.
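
For what it's worth, here's how I've been trying to confirm whether COMSTAR is actually 
handling the target (I'm going from memory on the exact service names, so treat this as a 
rough guide rather than gospel):

# Show the COMSTAR framework and iSCSI-related services and their states
svcs -a | egrep 'stmf|iscsi'

# If stmf is disabled, this should bring COMSTAR up along with its dependencies
svcadm enable -r svc:/system/stmf:default

# Any LUs listed here are being served by COMSTAR
stmfadm list-lu -v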

Anyone have any ideas as to what could really be causing this slowdown?

I've got five 500GB Seagate Barracuda ES.2 drives that I'm using for my zpool, and I've 
done the following (a couple of sanity-check commands follow the list).

1 - zpool create data mirror c0t0d0 c0t1d0
2 - zfs create -s -V 600g data/iscsitarget
3 - sbdadm create-lu /dev/zvol/rdsk/data/iscsitarget
4 - stmfadm add-view xxxxxxxxxxxxxxxxxxxxxx
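
As a sanity check on steps 3 and 4 (the GUID below is just a placeholder for the one 
sbdadm printed, which I've masked above):

# Confirm the LU exists and is backed by the zvol
stmfadm list-lu -v

# Confirm the view is in place for that LU
stmfadm list-view -l <lu-guid>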

So I've got a 500GB RAID1 zpool, I've created a 600GB sparse volume on top of it, shared 
it via iSCSI, and connected to it.  Everything works great up until I copy files to it, 
and then it just turns sluggish.

I start copying a file from my Windows 7 system to the iSCSI target, then pull up iostat 
with this command: zpool iostat -v data 10

It shows me this:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         895M   463G      0    666      0  7.93M
  mirror     895M   463G      0    666      0  7.93M
    c0t0d0      -      -      0    269      0  7.91M
    c0t1d0      -      -      0    272      0  7.93M
----------  -----  -----  -----  -----  -----  -----
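
Since zpool iostat only shows aggregate throughput, I've also been watching per-device 
service times with plain iostat (assuming I'm reading the flags right, -x gives extended 
stats and -n gives descriptive device names):

# Per-device latency, queue depth and %busy every 10 seconds
iostat -xn 10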

So I figure, since ZFS is pretty sweet, why not add some more drives?  That should bump 
up my performance.

I execute this:

zpool add data mirror c1t0d0 c1t1d0
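
(And just to confirm the new mirror actually landed as a second top-level vdev:)

zpool status data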

It adds the new mirror to my zpool, and I run iostat again while the copy is still running.

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.17G   927G      0    738  1.58K  8.87M
  mirror    1.17G   463G      0    390  1.58K  4.61M
    c0t0d0      -      -      0    172  1.58K  4.61M
    c0t1d0      -      -      0    175      0  4.61M
  mirror    42.5K   464G      0    348      0  4.27M
    c1t0d0      -      -      0    156      0  4.27M
    c1t1d0      -      -      0    159      0  4.27M
----------  -----  -----  -----  -----  -----  -----


I get a whopping extra 1MB/sec by adding two drives.  It fluctuates a lot, 
sometimes dropping down to 4MB/sec, sometimes rocketing all the way up to 
20MB/sec, but nothing consistent.

Basically, my transfer rates are the same no matter how many drives I add to 
the zpool.
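
The next thing I'm planning to try, to figure out whether it's the disks or the 
network/iSCSI path: a purely local write on the server, compared against the iSCSI 
numbers.  Roughly like this (the speedtest filesystem and the ~2GB size are just what I'd 
pick for the test):

# Throwaway filesystem on the same pool
zfs create data/speedtest

# Write ~2GB locally
dd if=/dev/zero of=/data/speedtest/testfile bs=1024k count=2048

# Watch the pool from another shell while the dd runs
zpool iostat -v data 10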

Is there anything I am missing on this?

BTW - "test" server specs

AMD dual core 6000+
2GB RAM
Onboard Sata controller
Onboard Ethernet (gigabit)

I've got a very similar rig to the OP's showing up next week (plus an InfiniBand card).  
I'd love to get this performing at gigabit Ethernet speeds; otherwise I may have to 
abandon the iSCSI project.