On Nov 19, 2007 1:43 AM, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
> On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> > (Including storage-discuss)
> >
> > I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
> > ST3300007FC (300GB - 10000 RPM FC-AL)
>
> Those disks are 2Gb disks, so the tray will operate at 2Gb.
>

That is still 256MB/s raw, or about 200MB/s usable after FC's 8b/10b
encoding. I am getting about 194MB/s.
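
(For reference, here is how I am measuring; the pool name "tank" and the
test file path are placeholders for my actual setup:)

    # watch pool bandwidth in 5-second intervals while the test runs
    zpool iostat tank 5

    # simple sequential write load: 8GB of zeros in 1MB blocks
    dd if=/dev/zero of=/tank/bigfile bs=1024k count=8192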


> > I created 16K-segment-size RAID0 LUNs from single FC-AL disks. Then
>
> You "showed" the single disks as LUN's to the host... if I understand 
> correctly.

Yes

>
> Q: Why 16K?

To avoid segment crossing. It will mainly be used for an Oracle DB whose
block size is 16K.
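
On the ZFS side I also intend to match the record size to the DB block
size. A minimal sketch, assuming a dataset named "tank/oradata" for the
datafiles:

    # match ZFS recordsize to Oracle's 16K db_block_size; this only
    # affects files created after the property is set
    zfs set recordsize=16k tank/oradata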

>
> > created a zpool with 8 4+1 raidz1 vdevs using those LUNs, out of single
>
> What is the layout here? Inside 1 tray, over multiple trays?

Over multiple trays
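
Roughly like this (a sketch; the c*t*d* device names are placeholders,
and the real command has 8 raidz1 groups of 5 LUNs, with each group
drawing its LUNs from different trays):

    zpool create tank \
        raidz1 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 \
        raidz1 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0
    # ...six more raidz1 groups of five LUNs each in the actual command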

>
> > disks. Also set zfs_nocacheflush to `1' to
> > take advantage of the 2G NVRAM cache of the controllers.
> >
> > I am using one port per controller. The rest of them are down (not in
> > use). Each controller port's speed is 4Gbps.
> >
>
> The 6140 is asymmetric, so the second controller is only available in
> fail-over mode; it is not actively used for load balancing.

So there is no way to create an aggregated channel across both controllers?

>
> You need to hook up more FC links to the primary controller that has
> the active LUNs assigned; that is the only way to easily get more
> IOPS.

If I add a second loop by bringing up another (currently inactive) port,
won't I have to rebuild the filesystem?
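
(Assuming MPxIO is enabled, I would expect the extra link to just show
up as an additional path to the same LUNs; here is how I would check the
path count, with the LUN device name a placeholder:)

    # list logical units and the number of paths to each
    mpathadm list lu

    # path details for one LUN
    mpathadm show lu /dev/rdsk/c4t0d0s2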

> > All LUNs have one controller as primary and the second as secondary
> >
> > I am getting only 125MB/s of zpool IO.
> >
>
> Seems a tad low, how are you testing?
>
> > I should get ~512MB/s per link.
>
> Hmmm, how did you get to this total? Keeping in mind that your tray is
> sitting at 2Gb and your extensions to the CSM trays are all single
> channel... you will get a 2Gb ceiling. Also have a look at

Even for OS-level IO? So the controller NVRAM does not help increase the
IO throughput seen by the OS?

> http://en.wikipedia.org/wiki/Fibre_Channel#History
>
> At first glance and not knowing the exact setup I would say that you
> will not get more than 200MB/s (if that much).

I am getting 194MB/s. Hmm, my 490 has 16G of memory. I wish I could
benefit some from the OS and controller RAM, at least for Oracle IO.
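
For reference, the cache-flush tuning mentioned above is this line in
/etc/system (takes effect after a reboot):

    * let the 6140's battery-backed NVRAM absorb writes by disabling
    * ZFS cache flushes; only safe with a protected write cache
    set zfs:zfs_nocacheflush = 1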

>
> Any reason why you are not using the RAID controller to do the work for you?

They are RAID0 LUNs, so the RAID controller is in use. I get higher IO
from a zpool built on single-disk RAID0 LUNs than from a RAID5-type LUN,
or from a RAID0 across multiple disks presented as one LUN with a zpool
on top.

>
> > Also is it possible to get 2GB/s IO by using the leftover ports of the
> > controllers?
> >
> > Is it also possible to get 4GB/s IO by aggregating the controllers (with
> > 8 ports total)?
> >
> >
> >
> > On Nov 16, 2007 5:30 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> > > I have the following layout
> > >
> > > A 490 with 8 1.8GHz CPUs and 16G of memory. 6 6140s, each with 2 FC
> > > controllers, using the A1 and B1 controller ports at 4Gbps.
> > > Each controller has 2G of NVRAM.
> > >
> > > On the 6140s I set up one RAID0 LUN per disk with a 16K segment size.
> > >
> > > On the 490 I created a zpool with 8 4+1 raidz1 vdevs.
> > >
> > > I am getting zpool IO of only 125MB/s with zfs:zfs_nocacheflush = 1 in
> > > /etc/system
> > >
> > > Is there a way I can improve the performance? I'd like to get 1GB/sec IO.
> > >
> > > Currently each LUN is set up with A1 as primary and B1 as secondary, or
> > > vice versa.
> > >
> > > I also have write cache enabled, according to CAM.
> > >
> > >
> >
> >
> >
>



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu