Hi Tim,

 

The 2540 controller can achieve a maximum of roughly 250 MB/sec on writes with the first 12
drives, so you are already pretty close to maximum throughput.

RAID 5 can be a little slower.

 

Please try to distribute the LUNs between the two controllers, and try benchmarking
with cache mirroring disabled (which is different from disabling the cache).
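
For example, you can check from the Solaris host which controller owns the active path
for each LUN with something like the following (a rough sketch, assuming MPxIO is
enabled; the device name is just one of your LUNs, and the target port group reported
as "active" corresponds to the owning controller):

% mpathadm list lu
% mpathadm show lu /dev/rdsk/c4t600A0B80003A8A0B0000096A47B4559Ed0s2

Cache mirroring itself is normally toggled per volume from Common Array Manager (or its
sscs CLI); the exact name and location of the option depends on the CAM version.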

 

Best regards

Mertol

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email [EMAIL PROTECTED]

 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim
Sent: Friday, 15 February 2008 03:13
To: Bob Friesenhahn
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Performance with Sun StorageTek 2540

 

On 2/14/08, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:

Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links.  This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.

My objective is to obtain the best single-file write performance.
Unfortunately, I am hitting some sort of write bottleneck and I am not
sure how to solve it.  I was hoping for a write speed of 300MB/second.
With ZFS on top of a firmware managed RAID 0 across all 12 drives, I
hit a peak of 200MB/second.  With each drive exported as a LUN and a
ZFS pool of six mirrored pairs, I see a write rate of 154MB/second.  The number
of drives used has not had much effect on write rate.
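
(For reference, the pool of six mirrored pairs described here was built with a command
along these lines, using the exported LUNs that appear in the zpool status output below:)

% zpool create Sun_2540 \
    mirror c4t600A0B80003A8A0B0000096A47B4559Ed0 c4t600A0B80003A8A0B0000096E47B456DAd0 \
    mirror c4t600A0B80003A8A0B0000096147B451BEd0 c4t600A0B80003A8A0B0000096647B453CEd0 \
    mirror c4t600A0B80003A8A0B0000097347B457D4d0 c4t600A0B800039C9B500000A9C47B4522Dd0 \
    mirror c4t600A0B800039C9B500000AA047B4529Bd0 c4t600A0B800039C9B500000AA447B4544Fd0 \
    mirror c4t600A0B800039C9B500000AA847B45605d0 c4t600A0B800039C9B500000AAC47B45739d0 \
    mirror c4t600A0B800039C9B500000AB047B457ADd0 c4t600A0B800039C9B500000AB447B4595Fd0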

Information on my pool is shown at the end of this email.

I am driving the writes using 'iozone' since 'filebench' does not seem
to want to install/work on Solaris 10.
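
(The write runs are along the lines of the following iozone invocation; the record size,
file size, and target path here are illustrative rather than the exact values used.
-i 0 selects the sequential write/rewrite test, -e and -c include fsync() and close()
in the timing, and the file size is kept well above the 20GB of RAM so the ARC cannot
simply absorb the whole file:)

% iozone -i 0 -e -c -r 128k -s 64g -f /Sun_2540/iozone.tmp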

I am suspecting that the problem is that I am running out of IOPS
since the drive array indicates an average of 214 IOPS for one drive
even though the peak write speed is only 26MB/second (peak read is
42MB/second).
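
(As a rough sanity check on that figure: 214 write ops/second at 26 MB/second works out
to about 26 * 1024 / 214 ≈ 124 KB per I/O, i.e. close to one default 128K ZFS record per
operation. Per-drive rates can be watched from the host during a run with something like
the following, looking at the w/s and kw/s columns:)

% iostat -xn 5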

Can someone share with me what they think the write bottleneck might
be and how I can surmount it?

Thanks,

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

% zpool status
   pool: Sun_2540
  state: ONLINE
  scrub: none requested
config:

          NAME                                       STATE     READ WRITE CKSUM
          Sun_2540                                   ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B80003A8A0B0000096A47B4559Ed0  ONLINE       0     0     0
              c4t600A0B80003A8A0B0000096E47B456DAd0  ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B80003A8A0B0000096147B451BEd0  ONLINE       0     0     0
              c4t600A0B80003A8A0B0000096647B453CEd0  ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B80003A8A0B0000097347B457D4d0  ONLINE       0     0     0
              c4t600A0B800039C9B500000A9C47B4522Dd0  ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B800039C9B500000AA047B4529Bd0  ONLINE       0     0     0
              c4t600A0B800039C9B500000AA447B4544Fd0  ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B800039C9B500000AA847B45605d0  ONLINE       0     0     0
              c4t600A0B800039C9B500000AAC47B45739d0  ONLINE       0     0     0
            mirror                                   ONLINE       0     0     0
              c4t600A0B800039C9B500000AB047B457ADd0  ONLINE       0     0     0
              c4t600A0B800039C9B500000AB447B4595Fd0  ONLINE       0     0     0

errors: No known data errors
freddy:~% zpool iostat
                capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Sun_2540    64.0G  1.57T    808    861  99.8M   105M
freddy:~% zpool iostat -v
                                            capacity     operations    bandwidth
pool                                     used  avail   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Sun_2540                                64.0G  1.57T    809    860   100M   105M
   mirror                                10.7G   267G    135    143  16.7M  17.6M
     c4t600A0B80003A8A0B0000096A47B4559Ed0      -      -     66    141  8.37M  17.6M
     c4t600A0B80003A8A0B0000096E47B456DAd0      -      -     67    141  8.37M  17.6M
   mirror                                10.7G   267G    135    143  16.7M  17.6M
     c4t600A0B80003A8A0B0000096147B451BEd0      -      -     66    141  8.37M  17.6M
     c4t600A0B80003A8A0B0000096647B453CEd0      -      -     66    141  8.37M  17.6M
   mirror                                10.7G   267G    134    143  16.7M  17.6M
     c4t600A0B80003A8A0B0000097347B457D4d0      -      -     66    141  8.34M  17.6M
     c4t600A0B800039C9B500000A9C47B4522Dd0      -      -     66    141  8.32M  17.6M
   mirror                                10.7G   267G    134    143  16.6M  17.6M
     c4t600A0B800039C9B500000AA047B4529Bd0      -      -     66    141  8.32M  17.6M
     c4t600A0B800039C9B500000AA447B4544Fd0      -      -     66    141  8.30M  17.6M
   mirror                                10.7G   267G    134    143  16.6M  17.6M
     c4t600A0B800039C9B500000AA847B45605d0      -      -     66    141  8.31M  17.6M
     c4t600A0B800039C9B500000AAC47B45739d0      -      -     66    141  8.30M  17.6M
   mirror                                10.7G   267G    134    143  16.6M  17.6M
     c4t600A0B800039C9B500000AB047B457ADd0      -      -     66    141  8.30M  17.6M
     c4t600A0B800039C9B500000AB447B4595Fd0      -      -     66    141  8.29M  17.6M
--------------------------------------  -----  -----  -----  -----  -----  -----


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



If you're going for the best single-file write performance, why are you doing
mirrors of the LUNs?  Perhaps I'm misunderstanding why you went from one
giant RAID-0 to what is essentially a RAID-10.

--Tim


