Definitely use COMSTAR, as Tim says.
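
If you haven't done the COMSTAR side before, the rough sequence is
something like this (from memory, so double-check the man pages; the
zvol path is just the one from your example):

  # stop sharing the zvol through the old iscsitgt daemon first
  zfs set shareiscsi=off tank/volumes/fsrv1data

  # enable the COMSTAR framework and its iSCSI port provider
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default

  # back a logical unit with the zvol and make it visible to initiators
  sbdadm create-lu /dev/zvol/rdsk/tank/volumes/fsrv1data
  stmfadm add-view <GUID printed by sbdadm>

  # create the target for the Windows initiator to log in to
  itadm create-target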

At home I'm using 4x WD Caviar Blacks on an AMD Phenom X4 @ 1.Ghz with
only 2GB of RAM, running snv_132. No HBA - just the onboard SB700 SATA
ports.

I can, with IOmeter, saturate GigE from my WinXP laptop via iSCSI.

Can you toss the RAID controller aside and use the motherboard SATA
ports with just a few drives? That would help show whether the RAID
controller is the problem - even a single drive should give better
throughput than you're seeing.
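
Before moving any hardware around, a quick local test would also show
whether the pool itself is slow or whether the problem is in the iSCSI
path - roughly (the filename is just an example):

  # sequential write straight into the pool, no network involved
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096

  # watch per-disk activity in another terminal while it runs
  iostat -xndz 1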

Cache, ZIL, and vdev tweaks are great - but I can assure you you're not
hitting any of those bottlenecks.

-marc

On 2/10/10, Tim Cook <t...@cook.ms> wrote:
> On Wed, Feb 10, 2010 at 4:06 PM, Brian E. Imhoff
> <beimh...@hotmail.com> wrote:
>
>> I am in the proof-of-concept phase of building a large ZFS/Solaris based
>> SAN box, and am experiencing absolutely poor / unusable performance.
>>
>> Where to begin...
>>
>> The Hardware setup:
>> Supermicro 4U 24 Drive Bay Chassis
>> Supermicro X8DT3 Server Motherboard
>> 2x Xeon E5520 Nehalem 2.26 Quad Core CPUs
>> 4GB Memory
>> Intel EXPI9404PT 4-port Gigabit Server Network Card (used for iSCSI
>> traffic only)
>> Adaptec 52445 28 Port SATA/SAS Raid Controller connected to
>> 24x Western Digital WD1002FBYS 1TB Enterprise drives.
>>
>> I have configured the 24 drives as single simple volumes in the Adaptec
>> RAID BIOS, and am presenting them to the OS as such.
>>
>> I then create a zpool using raidz2 across all 24 drives, with 1 as a
>> hot spare:
>> zpool create tank raidz2 c1t0d0 c1t1d0 [....] c1t22d0 spare c1t23d0
>>
>> Then create a volume store:
>> zfs create -o canmount=off tank/volumes
>>
>> Then create a 10 TB volume to be presented to our file server:
>> zfs create -V 10TB -o shareiscsi=on tank/volumes/fsrv1data
>>
>> From here, I discover the iSCSI target on our Windows Server 2008 R2 file
>> server and see the disk attached in Disk Management.  I initialize the
>> 10TB disk fine and begin to quick format it.  Here is where I first see
>> the poor performance issue: the quick format took about 45 minutes, and
>> once the disk is fully mounted, I get maybe 2-5 MB/s average to this disk.
>>
>> I have no clue what I could be doing wrong.  To my knowledge, I followed
>> the documentation for setting this up correctly, though I have not looked
>> at any tuning guides beyond the first line, which says you shouldn't need
>> to do any of this because the people who picked these defaults know more
>> about it than you.
>>
>> Jumbo frames are enabled on both sides of the iSCSI path, as well as on
>> the switch, and rx/tx buffers are increased to 2048 on both sides as well.
>> I know this is not a hardware / iSCSI network issue.  As another test, I
>> installed Openfiler in a similar configuration (using hardware RAID) on
>> this box and was getting 350-450 MB/s from our fileserver.
>>
>> An "iostat -xndz 1" readout of the "%b% coloum during a file copy to the
>> LUN shows maybe 10-15 seconds of %b at 0 for all disks, then 1-2 seconds
>> of
>> 100, and repeats.
>>
>> Is there anything I need to do to get this usable?  Or any additional
>> information I can provide to help solve this problem?  As nice as
>> Openfiler
>> is, it doesn't have ZFS, which is necessary to achieve our final goal.
>>
>>
>>
> You're extremely light on RAM for a system with 24TB of storage and two
> E5520s.  I don't think it's the entire source of your issue, but I'd
> strongly suggest doubling what you have as a starting point.
>
> What version of OpenSolaris are you using?  Have you considered using
> COMSTAR as your iSCSI target?
>
> --Tim
>

-- 
Sent from my mobile device