The speed each tray operates at on the 6140 depends on the type of disk installed in the tray. I am pretty sure that all of the 10K RPM disks only allow the tray to operate at 2Gb/s. Details can be found in the Hardware Installation manual: http://dlc.sun.com/pdf/819-7497-11/819-7497-11.pdf
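That would be consistent with the numbers below: after 8b/10b encoding, a 2Gb/s FC link carries roughly 200MB/s of payload (2Gbit/s x 8/10 / 8 bits per byte = 200MB/s), which is about where the ~194MB/s write ceiling on c6 sits.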
Ernie

adrian cockcroft wrote:
So you may be maxing out a single controller at 2Gbit/s, which would give you about this level of performance. Are you sure the HBA-SAN-6140 datapath is actually running at 4Gbit/s as you stated?

Adrian

On 11/17/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
Without the attachment, since 40K is the maximum message size.

Looks like 194MB/s is the max I can write to the zpool on controller c6.


 iostat -xnMCez 10 10
                            extended device statistics       ---- errors ---
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.3   0   0   1   0   0   1 c0t0d0
    1.1    4.4    0.0    0.3  0.0  0.5    0.1   87.0   0   2   0   0   0   0 c1
    0.5    2.0    0.0    0.1  0.0  0.2    0.1   88.3   0   1   0   0   0   0 c1t0d0
    0.5    2.4    0.0    0.1  0.0  0.3    0.0   85.8   0   1   0   0   0   0 c1t1d0
  328.3  349.8   12.0   14.4  0.0 33.3    0.0   49.1   0 350  86   0   0  86 c6

[...9 more instances..]

 cat iostat.txt | grep -w c6

    0.1 1877.8    0.0  190.5  0.0 682.5    0.0  363.5   0 3677  86   0   0  86 c6
    0.0 1453.5    0.0  132.6  0.0 509.2    0.0  350.3   0 2562  86   0   0  86 c6
    0.7 1817.3    0.0  189.9  0.0 907.4    0.0  499.1   0 3643  86   0   0  86 c6
    0.0 1483.8    0.0  138.0  0.0 514.9    0.0  347.0   0 2686  86   0   0  86 c6
    0.8 1467.2    0.0  138.5  0.0 669.9    0.0  456.3   0 2672  86   0   0  86 c6
    0.4 1812.5    0.0  193.3  0.0 979.5    0.0  540.3   0 3735  86   0   0  86 c6
    0.5 1487.6    0.0  140.2  0.0 506.2    0.0  340.2   0 2747  86   0   0  86 c6
    0.4 1480.1    0.0  140.7  0.0 742.1    0.0  501.3   0 2718  86   0   0  86 c6
    0.3 1862.8    0.0  194.2  0.0 882.5    0.0  473.6   0 3755  86   0   0  86 c6
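For what it's worth, you can pull the average write rate out of that log with something like this (Mw/s is the fourth field in the lines above):

 grep -w c6 iostat.txt | awk '{ sum += $4; n++ } END { printf "%.1f MB/s average\n", sum / n }'

which comes out to about 162MB/s for the nine samples above.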



---------- Forwarded message ----------
From: Asif Iqbal <[EMAIL PROTECTED]>
Date: Nov 17, 2007 5:00 PM
Subject: Re: [perf-discuss] zpool io to 6140 is really slow
To: adrian cockcroft <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], perf-discuss@opensolaris.org, [EMAIL PROTECTED]


Looks like the max I/O write I get is ~194MB/s on controller c6, where the zpool sits.


On Nov 17, 2007 3:29 PM, adrian cockcroft <[EMAIL PROTECTED]> wrote:

What do you get from iostat? Try something like

% iostat -xnMCez 10 10

(extended, named, Mbyte, controller, errors, nonzero, interval 10
secs, 10 measurements)

Post the results and you may get more commentary...

Adrian


On 11/17/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
(Including storage-discuss)

I have 6 6140s with 96 disks, 64 of which are Seagate
ST3300007FC (300GB, 10000 RPM FC-AL).

I created raid0 LUNs with a 16k segment size, one per single FC-AL disk. Then
I created a zpool of 8 4+1 raidz1 vdevs out of those single-disk
LUNs. I also set zfs nocacheflush to `1' to
take advantage of the 2G NVRAM cache of the controllers. A sketch of the layout is below.
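Roughly like this, though the pool name (tank) and the c6 device names here are only placeholders for the real 40 single-disk LUNs:

 # eight 4+1 raidz1 vdevs, one LUN per disk, 40 LUNs total
 zpool create tank \
     raidz1 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 \
     raidz1 c6t5d0 c6t6d0 c6t7d0 c6t8d0 c6t9d0 \
     raidz1 c6t10d0 c6t11d0 c6t12d0 c6t13d0 c6t14d0 \
     raidz1 c6t15d0 c6t16d0 c6t17d0 c6t18d0 c6t19d0 \
     raidz1 c6t20d0 c6t21d0 c6t22d0 c6t23d0 c6t24d0 \
     raidz1 c6t25d0 c6t26d0 c6t27d0 c6t28d0 c6t29d0 \
     raidz1 c6t30d0 c6t31d0 c6t32d0 c6t33d0 c6t34d0 \
     raidz1 c6t35d0 c6t36d0 c6t37d0 c6t38d0 c6t39d0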

I am using one port per controller; the rest of the ports are down (not in
use). Each controller port runs at 4Gbps.

All LUNs have one controller as primary and the other as secondary.

I am getting only 125MB/s according to the zpool I/O stats.

I should be getting ~512MB/s of I/O.

Also, is it possible to get 2GB/s of I/O by using the leftover ports on the
controllers?

Is it also possible to get 4GB/s of I/O by aggregating the controllers (with
8 ports total)?
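For scale, assuming roughly 400MB/s of usable payload per 4Gbit/s FC port after 8b/10b encoding:

 1 port  x ~400MB/s = ~400MB/s
 2 ports x ~400MB/s = ~800MB/s
 8 ports x ~400MB/s = ~3.2GB/s

so flat 2GB/s or 4GB/s figures would assume the raw line rate with no encoding or protocol overhead.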



On Nov 16, 2007 5:30 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
I have the following layout:

A 490 with 8 1.8GHz CPUs and 16G of memory, and 6 6140s with 2 FC controllers,
using the A1 and B1 controller ports at 4Gbps.
Each controller has 2G of NVRAM.

On the 6140s I set up one raid0 LUN per SAS disk with a 16K segment size.

On the 490 I created a zpool with 8 4+1 raidz1 vdevs.

I am getting zpool I/O of only 125MB/s with zfs:zfs_nocacheflush = 1 set in
/etc/system.
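For reference, that tunable goes in /etc/system as:

 set zfs:zfs_nocacheflush = 1

which stops ZFS from sending cache-flush commands to the array, so writes can be absorbed by the controllers' NVRAM.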

Is there a way I can improve the performance? I'd like to get 1GB/sec of I/O.

Currently each LUN is set up with A1 as primary and B1 as secondary, or vice versa.

I also have the write cache enabled, according to CAM.

--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu


--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu


--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu



--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu


_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
