Hi Bob,

I'm assuming you're measuring sequential write speed; posting the iozone
results would help guide the discussion.

For the configuration you describe, you should definitely be able to sustain
200 MB/s write speed for a single file, single thread, given your 4Gbps
Fibre Channel interfaces and RAID1.  As someone else pointed out, with
host-based mirroring every block is sent twice over the FC-AL link.  A
single 4Gbps link carries roughly 400 MB/s (load balancing across links
only helps when there are two independent writes), so you have to divide
that by two, leaving about 200 MB/s visible to the application.

If you do the mirroring in the RAID hardware instead, you'll get double
that write speed, about 400 MB/s, and the bottleneck becomes the single
FC-AL interface itself.
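
To make the arithmetic above concrete, here's a back-of-the-envelope
sketch (Python, purely illustrative; the ~400 MB/s of usable payload on a
single 4Gbps link is my assumption):

# Rough write-bandwidth math for the two mirroring setups above.
# Assumption: one 4Gbps FC-AL link carries ~400 MB/s of usable payload.
FC_LINK_MBPS = 400

def effective_write_mbps(link_mbps, host_mirrored):
    # Host-based mirroring (e.g. a ZFS mirror) sends every block twice
    # over the same link, so the application sees half the bandwidth.
    return link_mbps / 2 if host_mirrored else link_mbps

print(effective_write_mbps(FC_LINK_MBPS, host_mirrored=True))   # 200.0, host mirror
print(effective_write_mbps(FC_LINK_MBPS, host_mirrored=False))  # 400, RAID1 in array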

By comparison, we get 750 MB/s of sequential read using six 15K RPM 300GB
disks on an Adaptec (Sun OEM) in-host SAS RAID adapter in RAID10, across
four streams, and I think I saw 350 MB/s write speed on one stream.  Each
disk is capable of about 130 MB/s of sequential read or write.
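
A quick sanity check on those figures (a sketch using the per-disk rate
quoted above; the RAID10 layout as three mirror pairs is my assumption):

# Six disks at ~130 MB/s each in RAID10 (three mirror pairs assumed).
disks, per_disk_mbps = 6, 130
print(disks * per_disk_mbps)         # 780 -> read ceiling, near the 750 MB/s observed
print((disks // 2) * per_disk_mbps)  # 390 -> write ceiling: each block is written to
                                     #        both sides of a pair, near the 350 MB/s seen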

- Luke


On 2/15/08 10:39 AM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote:

> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>> What was the interlace on the LUN?
>>
>> The question was about LUN interlace, not interface.
>> 128K to 1M works better.
> 
> The "segment size" is set to 128K.  The max the 2540 allows is 512K.
> Unfortunately, the StorageTek 2540 and CAM documentation does not
> really define what "segment size" means.
> 
>> Any compression?
> 
> Compression is disabled.
> 
>> Does turning off checksums help the numbers?  (That would point to
>> CPU-limited throughput.)
> 
> I have not tried that, but this system is loafing during the benchmark.
> It has four 3GHz Opteron cores.
> 
> Does this output from 'iostat -xnz 20' help to understand issues?
> 
>                      extended device statistics
>      r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>      3.0    0.7   26.4    3.5  0.0  0.0    0.0    4.2   0   2 c1t1d0
>      0.0  154.2    0.0 19680.3  0.0 20.7    0.0  134.2   0  59
> c4t600A0B80003A8A0B0000096147B451BEd0
>      0.0  211.5    0.0 26940.5  1.1 33.9    5.0  160.5  99 100
> c4t600A0B800039C9B500000A9C47B4522Dd0
>      0.0  211.5    0.0 26940.6  1.1 33.9    5.0  160.4  99 100
> c4t600A0B800039C9B500000AA047B4529Bd0
>      0.0  154.0    0.0 19654.7  0.0 20.7    0.0  134.2   0  59
> c4t600A0B80003A8A0B0000096647B453CEd0
>      0.0  211.3    0.0 26915.0  1.1 33.9    5.0  160.5  99 100
> c4t600A0B800039C9B500000AA447B4544Fd0
>      0.0  152.4    0.0 19447.0  0.0 20.5    0.0  134.5   0  59
> c4t600A0B80003A8A0B0000096A47B4559Ed0
>      0.0  213.2    0.0 27183.8  0.9 34.1    4.2  159.9  90 100
> c4t600A0B800039C9B500000AA847B45605d0
>      0.0  152.5    0.0 19453.4  0.0 20.5    0.0  134.5   0  59
> c4t600A0B80003A8A0B0000096E47B456DAd0
>      0.0  213.2    0.0 27177.4  0.9 34.1    4.2  159.9  90 100
> c4t600A0B800039C9B500000AAC47B45739d0
>      0.0  213.2    0.0 27195.3  0.9 34.1    4.2  159.9  90 100
> c4t600A0B800039C9B500000AB047B457ADd0
>      0.0  154.4    0.0 19711.8  0.0 20.7    0.0  134.0   0  59
> c4t600A0B80003A8A0B0000097347B457D4d0
>      0.0  211.3    0.0 26958.6  1.1 33.9    5.0  160.6  99 100
> c4t600A0B800039C9B500000AB447B4595Fd0
> 
> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
> 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
