Hi all,

I'm currently evaluating migrating an NFS server (Linux CentOS 5.4 /
RHEL 5.4 x64-32) to an OpenSolaris box, and I'm seeing very high CPU
usage on the OpenSolaris box.

The ZFS box is a Dell R710 with two quad-cores (Intel E5506 @ 2.13 GHz),
16 GB RAM, and two Sun non-RAID HBAs connected to two J4400 JBODs, while the
Linux box is a dual-Xeon 3.0 GHz with 8 GB RAM and an Areca HBA with 512 MB
cache. Both servers have an Intel 10GbE card with jumbo frames enabled.

The ZFS box has one pool, configured as raidz2 with multipathing enabled
(to make use of the two HBAs and two J4400s), built from 20 disks
(7,200 rpm enterprise SATA Seagates, as supplied by Sun). The pool has
5 raidz2 vdevs with 4 disks each.
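For context, a 4-wide raidz2 vdev dedicates two of every four blocks in a stripe to parity, so half the raw space (and a matching share of each write's parity work) goes to redundancy. A quick sketch of the arithmetic (Python; the per-disk size is an illustrative placeholder, not taken from the pool):

```python
# Parity-overhead arithmetic for the layout above: 5 raidz2 vdevs, 4 disks each.

def raidz_usable_fraction(width: int, parity: int) -> float:
    """Fraction of raw capacity left for data, ignoring metadata/padding."""
    return (width - parity) / width

WIDTH, PARITY, VDEVS, DISK_TB = 4, 2, 5, 1.0  # DISK_TB is a placeholder
frac = raidz_usable_fraction(WIDTH, PARITY)
print(f"usable fraction: {frac:.0%}")  # 50% -- half the pool is parity
print(f"approx usable:   {VDEVS * WIDTH * DISK_TB * frac:.1f} TB")
```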
The test consists of mounting an NFS share from the ZFS box on the Linux
box and copying roughly 1.1 TB of data. The data is users' home
directories, so thousands of small files.
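For anyone wanting to reproduce the workload without the real home directories, a sketch along these lines generates a comparable tree of many small files (the destination, file count, and file size are hypothetical placeholders; point DEST at the NFS mount for a real run):

```shell
#!/bin/sh
# Generate a small-file tree resembling copied home directories.
# DEST, the file count, and the 4 KB size are placeholders, not from
# the original test.
DEST=${DEST:-$(mktemp -d)}
i=0
while [ "$i" -lt 100 ]; do
    dir="$DEST/dir$(( i % 10 ))"                # spread over 10 subdirectories
    mkdir -p "$dir"
    head -c 4096 /dev/urandom > "$dir/file$i"   # one 4 KB file per iteration
    i=$(( i + 1 ))
done
echo "created $(find "$DEST" -type f | wc -l) files in $DEST"
```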
During the copy from the Linux box to the ZFS box, the load average on
the ZFS box stays between 8 and 10, while on the Linux box it never goes
above 1.
Could the raidz2 configuration be the cause of such a high load on the
ZFS box, or am I missing something?

Thanks for all your time,
Bruno


Here are some more specs from the ZFS box :

r...@zfsbox01:/var/adm# zpool status -v RAIDZ2
  pool: RAIDZ2
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        RAIDZ2                     ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t5000C5001A101764d0  ONLINE       0     0     0
            c0t5000C5001A315D0Ad0  ONLINE       0     0     0
            c0t5000C5001A10EC6Bd0  ONLINE       0     0     0
            c0t5000C5001A0FFF4Bd0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c0t5000C50019C0A04Ed0  ONLINE       0     0     0
            c0t5000C5001A0FA028d0  ONLINE       0     0     0
            c0t5000C50019FCF180d0  ONLINE       0     0     0
            c0t5000C5001A11E657d0  ONLINE       0     0     0
          raidz2-2                 ONLINE       0     0     0
            c0t5000C5001A104A30d0  ONLINE       0     0     0
            c0t5000C5001A316841d0  ONLINE       0     0     0
            c0t5000C5001A0FF92Ed0  ONLINE       0     0     0
            c0t5000C50019EB02FDd0  ONLINE       0     0     0
          raidz2-3                 ONLINE       0     0     0
            c0t5000C5001A0FDBDCd0  ONLINE       0     0     0
            c0t5000C5001A0F2197d0  ONLINE       0     0     0
            c0t5000C50019BDBB8Dd0  ONLINE       0     0     0
            c0t5000C5001A3152A0d0  ONLINE       0     0     0
          raidz2-4                 ONLINE       0     0     0
            c0t5000C5001A100DA0d0  ONLINE       0     0     0
            c0t5000C5001A31544Cd0  ONLINE       0     0     0
            c0t5000C50019F03AF6d0  ONLINE       0     0     0
            c0t5000C50019FC3055d0  ONLINE       0     0     0

###############

r...@zfsbox01:~# zpool iostat RAIDZ2 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
RAIDZ2      2.29T  15.8T     43    305  3.03M  14.6M
RAIDZ2      2.29T  15.8T    114    663  12.7M  18.6M
RAIDZ2      2.29T  15.8T    129    595  14.0M  11.2M
RAIDZ2      2.29T  15.8T    881    623  13.0M  10.4M
RAIDZ2      2.29T  15.8T    227    449  8.48M  17.5M
RAIDZ2      2.29T  15.8T     39    498  4.55M  29.1M

#######################################

r...@zfsbox01:~# top -b | grep CPU | head -n1
CPU states: 35.2% idle,  2.2% user, 62.6% kernel,  0.0% iowait,  0.0% swap

r...@zfsbox01:~# mpstat
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0   55   0 16969 18180  102  785   55  127 1779    4   242    1  69   0  30
  1   70   0 18005 16820    4  926   44  142 1889    6   159    1  65   0  35
  2   42   0 16659 18091  262  555   53  113 1757   11   250    2  68   0  31
  3   48   0 18221 17380  246  667   40  122 1929   12   132    1  66   0  33
  4   38   0 16547 19965 1766  517   48  107 1775   10   264    2  70   0  29
  5   42   0 18596 19113 1527  595   35  115 1987    6   156    1  69   0  31
  6   23   0 16284 17921   10 2066   54  109 1763    4   115    1  70   0  29
  7   32   0 17576 16665    3 2233   39  134 1847    5    90    0  64   0  35

r...@zfsbox01:~# top -b | grep Memory
Memory: 16G phys mem, 2181M free mem, 8187M total swap, 8187M free swap


Feb 18 11:42:36 zfsbox01 unix: [ID 378719 kern.info] NOTICE: cpu_acpi:
_PSS package evaluation failed for with status 5 for CPU 2.
Feb 18 11:42:36 zfsbox01 unix: [ID 388705 kern.info] NOTICE: cpu_acpi:
error parsing _PSS for CPU 2
 

Feb 18 11:43:12 zfsbox01 ixgbe: [ID 611667 kern.info] NOTICE: ixgbe0:
identify 82598 adapter
Feb 18 11:43:12 zfsbox01 ixgbe: [ID 611667 kern.info] NOTICE: ixgbe0:
Request 16 handles, 2 available
Feb 18 11:43:12 zfsbox01 pcplusmp: [ID 805372 kern.info] pcplusmp:
pciex8086,10c7 (ixgbe) instance 0 irq 0x45 vector 0x66 ioapic 0xff intin
0xff is bound to cpu 3
Feb 18 11:43:12 zfsbox01 pcplusmp: [ID 805372 kern.info] pcplusmp:
pciex8086,10c7 (ixgbe) instance 0 irq 0x46 vector 0x67 ioapic 0xff intin
0xff is bound to cpu 4

Feb 18 11:43:12 zfsbox01 mac: [ID 469746 kern.info] NOTICE: ixgbe0
registered
Feb 18 11:43:12 zfsbox01 ixgbe: [ID 611667 kern.info] NOTICE: ixgbe0:
Intel 10Gb Ethernet, driver version 1.1.4


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
