Hi all,

We are using the following setup as a file server:

---
# uname -a
SunOS troubadix 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-280R

# prtconf -D
System Configuration:  Sun Microsystems  sun4u
Memory size: 2048 Megabytes
System Peripherals (Software Nodes):

SUNW,Sun-Fire-280R (driver name: rootnex)
    scsi_vhci, instance #0 (driver name: scsi_vhci)
    packages
        SUNW,builtin-drivers
        deblocker
        disk-label
        terminal-emulator
        obp-tftp
        SUNW,debug
        dropins
        kbd-translator
        ufs-file-system
    chosen
    openprom
        client-services
    options, instance #0 (driver name: options)
    aliases
    memory
    virtual-memory
    SUNW,UltraSPARC-III+
    memory-controller, instance #0 (driver name: mc-us3)
    SUNW,UltraSPARC-III+
    memory-controller, instance #1 (driver name: mc-us3)
    pci, instance #0 (driver name: pcisch)
        ebus, instance #0 (driver name: ebus)
            flashprom
            bbc
            power, instance #0 (driver name: power)
            i2c, instance #0 (driver name: pcf8584)
                dimm-fru, instance #0 (driver name: seeprom)
                dimm-fru, instance #1 (driver name: seeprom)
                dimm-fru, instance #2 (driver name: seeprom)
                dimm-fru, instance #3 (driver name: seeprom)
                nvram, instance #4 (driver name: seeprom)
                idprom
            i2c, instance #1 (driver name: pcf8584)
                cpu-fru, instance #5 (driver name: seeprom)
                temperature, instance #0 (driver name: max1617)
                cpu-fru, instance #6 (driver name: seeprom)
                temperature, instance #1 (driver name: max1617)
                fan-control, instance #0 (driver name: tda8444)
                motherboard-fru, instance #7 (driver name: seeprom)
                ioexp, instance #0 (driver name: pcf8574)
                ioexp, instance #1 (driver name: pcf8574)
                ioexp, instance #2 (driver name: pcf8574)
                fcal-backplane, instance #8 (driver name: seeprom)
                remote-system-console, instance #9 (driver name: seeprom)
                power-distribution-board, instance #10 (driver name: seeprom)
                power-supply, instance #11 (driver name: seeprom)
                power-supply, instance #12 (driver name: seeprom)
                rscrtc
            beep, instance #0 (driver name: bbc_beep)
            rtc, instance #0 (driver name: todds1287)
            gpio, instance #0 (driver name: gpio_87317)
            pmc, instance #0 (driver name: pmc)
            parallel, instance #0 (driver name: ecpp)
            rsc-control, instance #0 (driver name: su)
            rsc-console, instance #1 (driver name: su)
            serial, instance #0 (driver name: se)
        network, instance #0 (driver name: eri)
        usb, instance #0 (driver name: ohci)
        scsi, instance #0 (driver name: glm)
            disk (driver name: sd)
            tape (driver name: st)
            sd, instance #12 (driver name: sd)
 ...
            ses, instance #29 (driver name: ses)
            ses, instance #30 (driver name: ses)
        scsi, instance #1 (driver name: glm)
            disk (driver name: sd)
            tape (driver name: st)
            sd, instance #31 (driver name: sd)
            sd, instance #32 (driver name: sd)
...
            ses, instance #46 (driver name: ses)
            ses, instance #47 (driver name: ses)
        network, instance #0 (driver name: ce)
    pci, instance #1 (driver name: pcisch)
        SUNW,qlc, instance #0 (driver name: qlc)
            fp (driver name: fp)
                disk (driver name: ssd)
            fp, instance #1 (driver name: fp)
                ssd, instance #1 (driver name: ssd)
                ssd, instance #0 (driver name: ssd)
        scsi, instance #0 (driver name: mpt)
            disk (driver name: sd)
            tape (driver name: st)
            sd, instance #0 (driver name: sd)
            sd, instance #1 (driver name: sd)
...
            ses, instance #14 (driver name: ses)
            ses, instance #31 (driver name: ses)
    os-io
    iscsi, instance #0 (driver name: iscsi)
    pseudo, instance #0 (driver name: pseudo)
---

The disks reside in a StorEdge 3320 expansion unit
connected to the machine's SCSI controller card (LSI 1030 U320).
We've created a raidz2 pool:

---
# zpool status
  pool: storage_array
 state: ONLINE
 scrub: scrub completed with 0 errors on Wed Dec 12 23:38:36 2007
config:

        NAME         STATE     READ WRITE CKSUM
        storage_array  ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     0

errors: No known data errors
---
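For completeness, the pool was created along these lines (a
reconstruction from the status output above, not a shell
transcript):

---
# create a double-parity raidz2 vdev from the five StorEdge disks
zpool create storage_array raidz2 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0
---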

The throughput when writing from a local disk to the zpool
is around 30 MB/s; when writing from a client machine, it
drops to ~9 MB/s (NFS mounts over a dedicated gigabit switch).
In addition, while copying data to the pool, throughput drops
to almost zero every few seconds, regardless of the source
(NFS or local).
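For comparison purposes, a simple sequential write test could
look like this (paths, sizes, and the mount invocation are
placeholders, not our exact commands):

---
# on the server, writing to the pool's default mountpoint:
dd if=/dev/zero of=/storage_array/testfile bs=128k count=20000

# on the client, over the NFS mount:
mount -F nfs troubadix:/storage_array /mnt
dd if=/dev/zero of=/mnt/testfile bs=128k count=20000
---

The zpool iostat trace below, taken while copying data to the
pool, shows the periodic stalls: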

---
# zpool iostat 1
                  capacity     operations    bandwidth
pool            used  avail   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
...
storage_array   138G   202G      0      0      0      0
storage_array   138G   202G      0     11      0   123K
storage_array   138G   202G      2     30  3.96K  3.01M
storage_array   138G   202G      0     96      0  4.14M
storage_array   138G   202G      0    136      0  4.36M
storage_array   138G   202G      0     73      0  4.09M
storage_array   138G   202G      2     77   254K  9.19M
storage_array   138G   202G      0     64   127K  6.05M
storage_array   138G   202G      0     75      0  8.70M
storage_array   138G   202G      0    101      0  3.98M
storage_array   138G   202G      5    154  2.97K  6.19M
storage_array   138G   202G      0     74      0  8.06M
storage_array   138G   202G      0    121      0  2.77M
storage_array   138G   202G      0     64      0  4.95M
storage_array   138G   202G      0     63      0  7.73M
storage_array   138G   202G      0     75      0  9.41M
storage_array   138G   202G      1    128   235K  4.00M
storage_array   138G   202G      0     97      0  4.16M
storage_array   138G   202G      0     72      0  9.08M
storage_array   138G   202G      0     70      0  8.68M
storage_array   138G   202G      0     70      0  8.79M
storage_array   138G   202G      2    102  13.4K  8.01M
storage_array   138G   202G      0    178      0   599K
storage_array   138G   202G      0     37      0  3.39M
storage_array   138G   202G      0     79      0  9.92M
storage_array   138G   202G      0     72      0  9.10M
storage_array   138G   202G      0     79      0  9.93M
storage_array   138G   202G      0     69      0  8.67M
storage_array   138G   202G      0     76      0  9.53M
storage_array   138G   202G      0    116      0  8.50M
storage_array   138G   202G      0    112      0  2.76M
storage_array   138G   202G      0      0      0      0
storage_array   138G   202G      0     55      0  6.95M
storage_array   138G   202G      0      0      0      0
storage_array   138G   202G      0     12      0  1.61M
storage_array   138G   202G      0     70      0  8.79M
storage_array   138G   202G      0     88      0  11.0M
storage_array   138G   202G      0     79      0  9.90M
...
---
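If it helps with diagnosis, we can gather per-device statistics
during a copy, for example:

---
# per-disk service times and utilization, one-second intervals
iostat -xn 1

# server-side NFS statistics (calls, badcalls, retransmissions)
nfsstat -s
---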

The performance is somewhat disappointing. Does anyone have
a similar setup and can share some figures for comparison?
Any pointers to possible improvements are greatly appreciated.


Cheers,
  Frank