Hi Johansen,

As you requested, here's the output of zpool status:

# sudo zpool status
  pool: r12_data
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        r12_data                  ONLINE       0     0     0
          c9t5006016B306005AAd4   ONLINE       0     0     0
          c9t5006016B306005AAd5   ONLINE       0     0     0
          c9t5006016B306005AAd6   ONLINE       0     0     0
          c9t5006016B306005AAd7   ONLINE       0     0     0
          c9t5006016B306005AAd8   ONLINE       0     0     0
          c9t5006016B306005AAd9   ONLINE       0     0     0
          c9t5006016B306005AAd10  ONLINE       0     0     0
          c9t5006016B306005AAd11  ONLINE       0     0     0
          c9t5006016B306005AAd12  ONLINE       0     0     0
          c9t5006016B306005AAd13  ONLINE       0     0     0
          c9t5006016B306005AAd14  ONLINE       0     0     0
          c9t5006016B306005AAd15  ONLINE       0     0     0
          c9t5006016B306005AAd16  ONLINE       0     0     0
          c9t5006016B306005AAd17  ONLINE       0     0     0
          c9t5006016B306005AAd18  ONLINE       0     0     0
          c9t5006016B306005AAd19  ONLINE       0     0     0
          c0t5d0                  ONLINE       0     0     0
          c2t13d0                 ONLINE       0     0     0

errors: No known data errors

  pool: r12_logz
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_logz                 ONLINE       0     0     0
          c9t5006016B306005AAd1  ONLINE       0     0     0

errors: No known data errors

  pool: r12_oApps
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_oApps                ONLINE       0     0     0
          c9t5006016B306005AAd2  ONLINE       0     0     0

errors: No known data errors

  pool: r12_oWork
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_oWork                ONLINE       0     0     0
          c9t5006016B306005AAd3  ONLINE       0     0     0

errors: No known data errors

  pool: r12_product
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_product              ONLINE       0     0     0
          c9t5006016B306005AAd0  ONLINE       0     0     0

errors: No known data errors
#

I'll also post to the zfs-discuss mailing list, as you suggested.

I'm not sure whether I've picked the correct pool configuration, so please
advise based on the zpool status output above if you can.  You also mentioned
a utility called filebench.  Is it included in the 08/07 release, or only in
the latest Solaris release?

Thanks for the help!

As for the recordsize, I've changed it to 8k, since that's the block size the
DBA says he's using for Oracle.
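
In case it helps, the setting can be confirmed with something like the
following (I'm assuming the property was applied to the pools' root
datasets):

# zfs get -r recordsize r12_oApps r12_oWork
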
----- Original Message ----
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
To: Bob Friesenhahn <[EMAIL PROTECTED]>
Cc: Grant Lowe <[EMAIL PROTECTED]>; perf-discuss@opensolaris.org
Sent: Wednesday, June 11, 2008 8:31:25 AM
Subject: Re: [perf-discuss] Performance issue

Some of this is going to echo Bob's thoughts and suggestions.

> > Sun E4500 with Solaris 10, 08/07 release.  SAN attached through a 
> > Brocade switch to EMC CX700.  There is one LUN per file system.
> 
> What do you mean by "one LUN per file system"?  Do you mean that the 
> entire pool is mapped to one huge EMC LUN?  File systems are just 
> logical space allocations in a ZFS pool with logical blocksize and 
> other options.  How many spindles are hidden under the big EMC LUN?

It would be useful if you would include your zpool configuration.  The
output of zpool status is generally sufficient.

Configuration of your pool does impact the performance that you'll
observe.  If you have questions about the optimal way to configure a
pool for a certain workload, I'd ask on zfs-discuss.  A lot of storage
and ZFS experts are subscribed to that list.
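
For what it's worth, here's the general shape of a striped-mirror pool,
which tends to be a good fit for random database I/O when ZFS-level
redundancy is wanted.  The pool and device names below are placeholders,
not a recommendation for your CX700 LUNs:

# zpool create dbpool \
      mirror c0t0d0 c1t0d0 \
      mirror c0t1d0 c1t1d0
# zpool status dbpool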

> Even though you used 8K blocks, in my experience, this sort of 
> sequential test is entirely meaningless for sequential I/O performance 
> analysis.  Even with 128K filesystem blocks, ZFS does sequential I/O 
> quite efficiently when the I/O requests are 8K.

It would be nice to know what kind of workload you're trying to measure.
There's a filesystem benchmark suite called filebench.  It's included in
Solaris now.  It has a bunch of different sequential I/O benchmarks, and
it's a good tool to use for comparisons.
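
A minimal interactive run might look something like this; the workload
name, target directory, and run length are only placeholders, so pick a
personality that matches the I/O you actually care about:

# filebench
filebench> load oltp
filebench> set $dir=/r12_data
filebench> run 60
filebench> quit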

As far as the record size is concerned, 128k is generally optimal.  I
would only consider tuning this if you're doing a substantial amount of
I/O in a block size that isn't 128k.  Setting the recordsize to 8k is
generally advised for database workloads when the database is writing 8k
records.
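
If you do go with 8k, keep in mind that the property only affects file
blocks written after the change, so it's best set before the database
files are created.  The commands themselves are just (dataset name is an
example taken from your pools):

# zfs set recordsize=8k r12_oApps
# zfs get recordsize r12_oApps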

> > I thought that I would see better performance than this.  I've read 
> > a lot of the blogs, tried tuning this, and still no performance 
> > gains.  Are these speeds normal?  Did I miss something (or 
> > somethings)?  Thanks for any help!
> 
> While I don't claim any particular experience in this area, it seems 
> logical to me that if you are mapping one pool to one huge LUN that 
> you will be reducing ZFS's available transaction rate since it won't 
> be able to schedule the parallel I/Os itself and therefore becomes 
> subject to more of the latency associated with getting data to the 
> array.  Databases normally request that their writes be synced to disk 
> so the latency until the RAID array responds that the data is safe is 
> a major factor.

I would double-check that you've picked the correct pool configuration.
I would also check that you're correctly configured for your array.
There are situations where ZFS tries to flush the cache of the
underlying device.  This is to ensure that the blocks actually make it
to disk.  On some arrays, this isn't necessary and results in a
considerable slowdown.
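
The tunable usually mentioned here is zfs_nocacheflush.  On Solaris 10
8/07 you would add the line below to /etc/system and reboot, but treat
this as a sketch only, and only consider it if the array cache is
nonvolatile (battery-backed):

        set zfs:zfs_nocacheflush = 1

The cleaner long-term fix is usually to configure the array itself to
ignore the cache-flush requests, but how to do that is array-specific.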

-j