Hi all.

I'm new to ZFS and have just built my first ZFS pools and file systems.
My Oracle DBA tells me he's seeing poor performance and would like to go
back to VxFS.

Here's my hardware:

Sun E4500 with Solaris 10, 08/07 release, SAN-attached through a Brocade
switch to an EMC CX700.  There is one LUN per file system.  Here are the file
systems:

# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0         2.0G   142M   1.8G     8%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   194M   1.2M   192M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/dev/md/dsk/d9         3.9G   3.1G   827M    80%    /usr
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d12        2.0G   403M   1.5G    21%    /var
swap                   194M   1.6M   192M     1%    /tmp
swap                   192M    32K   192M     1%    /var/run
/dev/md/dsk/d6         2.0G   1.2G   728M    63%    /opt
r12_oApps               78G    55G    23G    71%    /opt/a01/oakwcr12
r12_data/d01            44G    42G   2.0G    96%    /opt/d01/oakwcr12
r12_data/d02            42G    40G   2.1G    95%    /opt/d02/oakwcr12
r12_data/d03            42G    40G   2.1G    95%    /opt/d03/oakwcr12
r12_data/d04            44G    40G   3.7G    92%    /opt/d04/oakwcr12
r12_data/d05            42G    33G   9.1G    79%    /opt/d05/oakwcr12
r12_data/d06            42G    40G   2.4G    95%    /opt/d06/oakwcr12
r12_data/d07            42G    31G    11G    75%    /opt/d07/oakwcr12
r12_data/d08            42G    40G   1.7G    96%    /opt/d08/oakwcr12
r12_data/d09            42G    40G   1.8G    96%    /opt/d09/oakwcr12
r12_data/d10            44G    42G   2.5G    95%    /opt/d10/oakwcr12
r12_data/d11            42G    39G   3.3G    93%    /opt/d11/oakwcr12
r12_data/d12            42G    13G    29G    33%    /opt/d12/oakwcr12
r12_data/d21            42G    37G   5.4G    88%    /opt/d21/oakwcr12
r12_data/d22            42G    40G   1.9G    96%    /opt/d22/oakwcr12
r12_data/d23            42G    40G   2.2G    95%    /opt/d23/oakwcr12
r12_data/d24            42G    40G   2.1G    95%    /opt/d24/oakwcr12
r12_logz                14G    24K    14G     1%    /opt/l01/oakwrc12
r12_product             19G   4.7G    14G    26%    /opt/p01/oakwcr12
r12_oWork               39G    28K    39G     1%    /opt/w01/oakwcr12
#

Here are the record sizes:

# zfs get recordsize
NAME          PROPERTY    VALUE         SOURCE
r12_data      recordsize  8K            local
r12_data/d01  recordsize  8K            local
r12_data/d02  recordsize  8K            local
r12_data/d03  recordsize  8K            local
r12_data/d04  recordsize  8K            local
r12_data/d05  recordsize  8K            local
r12_data/d06  recordsize  8K            local
r12_data/d07  recordsize  8K            local
r12_data/d08  recordsize  8K            local
r12_data/d09  recordsize  8K            local
r12_data/d10  recordsize  8K            local
r12_data/d11  recordsize  8K            local
r12_data/d12  recordsize  8K            local
r12_data/d21  recordsize  8K            local
r12_data/d22  recordsize  8K            local
r12_data/d23  recordsize  8K            local
r12_data/d24  recordsize  8K            local
r12_logz      recordsize  128K          default
r12_oApps     recordsize  128K          default
r12_oWork     recordsize  128K          default
r12_product   recordsize  128K          default
#
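For what it's worth, the 8K recordsize on r12_data was chosen to match Oracle's 8 KB db_block_size, set along these lines (note that recordsize only affects blocks written after the property is changed, so it was set before loading data):

```shell
# Match the dataset recordsize to Oracle's db_block_size (8 KB).
# This only applies to files written after the change.
zfs set recordsize=8k r12_data
```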

Doing some performance tests, I see this (writing and then reading back an
8 GiB file in 8 KB blocks):

# time dd if=/dev/zero of=test.dbf bs=8k count=1048576
1048576+0 records in
1048576+0 records out

real     6:29.9
user        6.2
sys      3:26.1
#  time dd if=test.dbf of=/dev/null bs=8k
1048576+0 records in
1048576+0 records out

real     3:06.4
user        5.5
sys      1:26.9
#
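Working the numbers: 1048576 records of 8 KB is 8192 MiB, so against the wall-clock ("real") times that comes out to roughly 21 MiB/s writing and 44 MiB/s reading:

```shell
# Rough sequential throughput from the dd timings above.
# 1048576 records x 8 KiB = 8192 MiB; times rounded to whole seconds.
total_mib=8192
write_secs=390   # real 6:29.9
read_secs=186    # real 3:06.4

echo "write: $((total_mib / write_secs)) MiB/s"   # ~21 MiB/s
echo "read:  $((total_mib / read_secs)) MiB/s"    # ~44 MiB/s
```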
Here are the pools on the system:

# sudo zpool status
  pool: r12_data
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        r12_data                  ONLINE       0     0     0
          c9t5006016B306005AAd4   ONLINE       0     0     0
          c9t5006016B306005AAd5   ONLINE       0     0     0
          c9t5006016B306005AAd6   ONLINE       0     0     0
          c9t5006016B306005AAd7   ONLINE       0     0     0
          c9t5006016B306005AAd8   ONLINE       0     0     0
          c9t5006016B306005AAd9   ONLINE       0     0     0
          c9t5006016B306005AAd10  ONLINE       0     0     0
          c9t5006016B306005AAd11  ONLINE       0     0     0
          c9t5006016B306005AAd12  ONLINE       0     0     0
          c9t5006016B306005AAd13  ONLINE       0     0     0
          c9t5006016B306005AAd14  ONLINE       0     0     0
          c9t5006016B306005AAd15  ONLINE       0     0     0
          c9t5006016B306005AAd16  ONLINE       0     0     0
          c9t5006016B306005AAd17  ONLINE       0     0     0
          c9t5006016B306005AAd18  ONLINE       0     0     0
          c9t5006016B306005AAd19  ONLINE       0     0     0
          c0t5d0                  ONLINE       0     0     0
          c2t13d0                 ONLINE       0     0     0

errors: No known data errors

  pool: r12_logz
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_logz                 ONLINE       0     0     0
          c9t5006016B306005AAd1  ONLINE       0     0     0

errors: No known data errors

  pool: r12_oApps
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_oApps                ONLINE       0     0     0
          c9t5006016B306005AAd2  ONLINE       0     0     0

errors: No known data errors

  pool: r12_oWork
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_oWork                ONLINE       0     0     0
          c9t5006016B306005AAd3  ONLINE       0     0     0

errors: No known data errors

  pool: r12_product
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        r12_product              ONLINE       0     0     0
          c9t5006016B306005AAd0  ONLINE       0     0     0

errors: No known data errors
#
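While the DBA's workload is running I've also been watching per-LUN activity, roughly like this, to see whether the load is spread evenly across the r12_data stripe:

```shell
# Per-vdev ops and bandwidth for the data pool, sampled every 5 seconds
zpool iostat -v r12_data 5
```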

I thought I would see better performance than this.  I've read a lot of the
blogs and tried various tunings, but still see no performance gains.  Are
these speeds normal?  Did I miss something (or somethings)?  Thanks for any help!
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
