On Nov 21, 2007 10:37 AM, Lion, Oren-P64304 <[EMAIL PROTECTED]> wrote:
>
> I recently tweaked Oracle (8K blocks, log_buffer > 2M) on a Solaris

Oracle here is set up with a 16K block size and a 2G log buffer.

I am using a test pool: a raid0 stripe of six 10K RPM FC disks (two from
each of 3 trays).

So far I have only tried 16K and 32K segment sizes. I will try other
sizes and post the performance results here.

Thanks for sharing the dtrace result as well. Excellent data!
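For anyone without the full DTraceToolkit handy, a rough one-liner in the same spirit as bitesize.d (this is my own sketch, not the script itself; it needs root on a Solaris box and Ctrl-C to print):

```shell
# Roughly what bitesize.d reports: a power-of-two distribution of I/O
# sizes (b_bcount) keyed by the pid and name of the process issuing them.
dtrace -n 'io:::start { @[pid, execname] = quantize(args[0]->b_bcount); }'
```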

> AMD64 system for max performance on a Sun 6140 with one tray of 73 GB
> 15K RPM drives. Definitely needed to place the datafiles and redo logs
> on isolated RAID groups. Wasn't sure how many blocks Oracle batches for
> IO. Used dtrace's bitesize script to generate the distributions below.
> Based on the dtrace output, and after testing multiple segment sizes,
> finally settled on Segment Size (stripe size) 256K for both datafiles
> and redo logs.
>
> Also observed performance boost by using forcedirectio and noatime on
> the 6140 mount points and observed smoother performance by using 2M
> pagesize (MPSS) by adding the line below to Oracle's .profile (and
> verified with pmap -s [ORACLE PID]|grep 2M).
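For reference, forcedirectio and noatime are set as UFS mount options; a hypothetical sketch (the device path and mount point below are placeholders, not from this thread):

```shell
# Hypothetical example: mount a 6140 LUN's UFS filesystem with
# forcedirectio (bypass the page cache) and noatime (skip atime updates).
mount -F ufs -o forcedirectio,noatime /dev/dsk/c2t0d0s6 /u01/oradata

# or persistently, as an /etc/vfstab entry:
# /dev/dsk/c2t0d0s6 /dev/rdsk/c2t0d0s6 /u01/oradata ufs 2 yes forcedirectio,noatime
```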
>
> Oracle MPSS .profile
> LD_PRELOAD=$LD_PRELOAD:mpss.so.1
> MPSSHEAP=2M
> MPSSSTACK=2M
> export LD_PRELOAD MPSSHEAP MPSSSTACK
> MPSSERRFILE=~/mpsserr
> export MPSSERRFILE
>
> Here's the final 6140 config:
> Oracle datafiles => 12 drives RAID 10 Segment Size 256K
> Oracle redo log A => 2 drives RAID 0 Segment Size 256K
> Oracle redo log B => 2 drives RAID 0 Segment Size 256K
>
> ./bitesize.d
>  1452  ora_dbw2_prf02\0
>
>            value  ------------- Distribution ------------- count
>            16384 |                                         0
>            32768 |@@@@@@@@@@@@@@@@@@@@                     1
>            65536 |                                         0
>           131072 |@@@@@@@@@@@@@@@@@@@@                     1
>           262144 |                                         0
>
>     1454  ora_dbw3_prf02\0
>
>            value  ------------- Distribution ------------- count
>             4096 |                                         0
>             8192 |@@@@@@@@@@@@@@@@@@@@@@@                  4
>            16384 |@@@@@@                                   1
>            32768 |@@@@@@                                   1
>            65536 |                                         0
>           131072 |@@@@@@                                   1
>           262144 |                                         0
>
>     1448  ora_dbw0_prf02\0
>
>            value  ------------- Distribution ------------- count
>             4096 |                                         0
>             8192 |@@@@@@@@@@@@@@@@@@@@@@                   5
>            16384 |@@@@@@@@@@@@@                            3
>            32768 |                                         0
>            65536 |                                         0
>           131072 |@@@@                                     1
>           262144 |                                         0
>
>     1450  ora_dbw1_prf02\0
>
>            value  ------------- Distribution ------------- count
>            65536 |                                         0
>           131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
>           262144 |                                         0
>
>     1458  ora_ckpt_prf02\0
>
>            value  ------------- Distribution ------------- count
>             8192 |                                         0
>            16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 43
>            32768 |                                         0
>
>     1456  ora_lgwr_prf02\0
>
>            value  ------------- Distribution ------------- count
>              256 |                                         0
>              512 |@@@@@@@@                                 24
>             1024 |@@@@                                     12
>             2048 |@@@@@                                    15
>             4096 |@@@@@                                    14
>             8192 |                                         0
>            16384 |                                         1
>            32768 |@                                        4
>            65536 |                                         0
>           131072 |@                                        4
>           262144 |@@                                       6
>           524288 |@@@@@@@@@@@@@@                           42
>          1048576 |                                         0
>
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Asif Iqbal
> Sent: Tuesday, November 20, 2007 3:08 PM
> To: [EMAIL PROTECTED]
> Cc: zfs-discuss@opensolaris.org; [EMAIL PROTECTED];
> [EMAIL PROTECTED]
> Subject: Re: [perf-discuss] [storage-discuss] zpool io to 6140 is really
> slow
>
>
> On Nov 20, 2007 10:40 AM, Andrew Wilson <[EMAIL PROTECTED]> wrote:
> >
> >  What kind of workload are you running. If you are you doing these
> > measurements with some sort of "write as fast as possible"
> > microbenchmark,
>
Oracle database with blocksize 16K .. populating the database as fast
as I can
>
> > once the 4 GB of nvram is full, you will be limited by backend
> > performance (FC disks and their interconnect) rather than the host /
> controller bus.
> >
> >  Since, best case, 4 gbit FC can transfer 4 GBytes of data in about 10
> > seconds, you will fill it up, even with the backend writing out data
> > as fast as it can, in about 20 seconds. Once the nvram is full, you
> > will only see the backend (e.g. 2 Gbit) rate.
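Drew's ~20 second figure checks out with back-of-the-envelope shell arithmetic (400 MB/s and 200 MB/s are the approximate payload rates of 4Gb and 2Gb FC after 8b/10b encoding):

```shell
# Net fill rate of the controller cache is the difference between the
# host-side ingest rate and the backend drain rate.
nvram_mb=4096      # 4 GB of controller NVRAM
fill_rate=400      # MB/s in over 4Gb FC (approx, after 8b/10b)
drain_rate=200     # MB/s drained to the 2Gb backend (approx)
net=$(( fill_rate - drain_rate ))
echo "$(( nvram_mb / net )) seconds to fill"
```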
> >
> >  The reason these controller buffers are useful with real applications
> > is that they smooth the bursts of writes that real applications tend
> > to generate, thus reducing the latency of those writes and improving
> > performance. They will then "catch up" during periods when few writes
> > are being issued. But a typical microbenchmark that pumps out a steady
> > stream of writes won't see this benefit.
> >
> >  Drew Wilson
> >
> >
> >
> >  Asif Iqbal wrote:
> >  On Nov 20, 2007 7:01 AM, Chad Mynhier <[EMAIL PROTECTED]> wrote:
> >
> >
> >  On 11/20/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> >
> >
> >  On Nov 19, 2007 1:43 AM, Louwtjie Burger <[EMAIL PROTECTED]>
> wrote:
> >
> >
> >  On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> >
> >
> >  (Including storage-discuss)
> >
> > I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
> > ST3300007FC (300GB - 10000 RPM FC-AL)
> >
> >  Those disks are 2Gb disks, so the tray will operate at 2Gb.
> >
> >
> >  That is still 256MB/s. I am getting about 194MB/s.
> >
> >  2Gb fibre channel is going to max out at a data transmission rate
> >
> >  But I am running 4Gb fibre channel links with 4GB of NVRAM on six
> > trays of 300GB FC 10K RPM (2Gb/s) disks
> >
> > So I should get "a lot" more than ~ 200MB/s. Shouldn't I?
> >
> >
> >
> >
> >  around 200MB/s rather than the 256MB/s that you'd expect. Fibre
> > channel uses an 8-bit/10-bit encoding, so it transmits 8-bits of data
> > in 10 bits on the wire. So while 256MB/s is being transmitted on the
> > connection itself, only 200MB/s of that is the data that you're
> > transmitting.
> >
> > Chad Mynhier
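Chad's point is easy to verify with shell arithmetic: 8b/10b encoding means only 8 of every 10 bits on the wire are payload, so a nominal 2Gbit/s link carries roughly 200MB/s of data:

```shell
line_rate_mbit=2000                       # nominal 2 Gbit/s FC line rate
data_mbit=$(( line_rate_mbit * 8 / 10 ))  # 8b/10b: 80% of bits are payload
data_mbyte=$(( data_mbit / 8 ))           # convert megabits to megabytes
echo "${data_mbyte} MB/s"
```

Which lines up with the ~194MB/s observed once protocol overhead is counted.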
>
> --
> Asif Iqbal
> PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
>
> _______________________________________________
> perf-discuss mailing list
> [EMAIL PROTECTED]
>



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
