> 
> What build/version of Solaris/ZFS are you using?

Solaris 10 11/06.
bash-3.00# uname -a
SunOS nitrogen 5.10 Generic_118855-33 i86pc i386 i86pc
bash-3.00#

> What block size are you using for writes in bonnie++?  I find performance
> on streaming writes is better w/ larger writes.

I'm afraid I don't know what block size bonnie++ uses by default; I'm not 
specifying one on the command line.
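
For what it's worth, the bonnie++ man page says the chunk size can be given as 
part of -s, so larger writes should be easy to test.  Something along these 
lines (the directory is just an example, and I'm going from the man page 
rather than having tried these exact numbers):

# 8 GiB file (2x RAM), 128 KiB chunks; -u root since it's being run as root
bonnie++ -d /internal/bench -s 8g:128k -u root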

> What happens when you run two threads at once?  Does write performance
> improve?

If I use raidz, no (overall throughput is actually nearly halved!).  If I use 
"RAID0" (just striped disks, no redundancy) it improves, significantly in some 
cases.

> Does zpool iostat -v 1 report anything interesting during the benchmark?
> What about iostat -x 1?  Is one disk significantly more busy than the
> others?

Nothing looked like what I would consider "interesting".  The load seemed quite 
evenly balanced across all the disks (based on my Mk 1 eyeball).  I've pasted 
some of it below, from halfway through the write cycle of each run:

(The configuration in this particular run is two 8-disk raidzs.  The 
configuration I'll probably actually end up using is a 16-disk raidz2, to 
maximise my disk space/reliability ratio.)
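
For reference, the two layouts being compared would be created roughly like 
this (pool name and device names as in the iostat output below; typed from 
memory, so treat it as a sketch rather than cut-and-paste):

# two 8-disk raidz vdevs - the config benchmarked below
zpool create internal \
    raidz  c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# single 16-disk raidz2 - the config I expect to end up with
zpool create internal \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
           c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0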

Single run:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen         8G           91101  35 76667  42           216524  49 220.0   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 14261  99 +++++ +++ 24105  99 14395  99 +++++ +++ 25543  99
nitrogen,8G,,,91101,35,76667,42,,,216524,49,220.0,2,16,14261,99,+++++,+++,24105,99,14395,99,+++++,+++,25543,99

zpool iostat -v 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
internal    4.31G  3.63T      0    838      0  89.0M
  raidz1    2.14G  1.82T      0    423      0  45.1M
    c0t0d0      -      -      0    397      0  6.48M
    c0t1d0      -      -      0    396      0  6.43M
    c0t2d0      -      -      0    396      0  6.47M
    c0t3d0      -      -      0    386      0  6.43M
    c0t4d0      -      -      0    398      0  6.48M
    c0t5d0      -      -      0    397      0  6.43M
    c0t6d0      -      -      0    396      0  6.47M
    c0t7d0      -      -      0    386      0  6.43M
  raidz1    2.17G  1.82T      0    414      0  43.9M
    c1t0d0      -      -      0    392      0  6.32M
    c1t1d0      -      -      0    391      0  6.28M
    c1t2d0      -      -      0    391      0  6.32M
    c1t3d0      -      -      0    381      0  6.27M
    c1t4d0      -      -      0    392      0  6.32M
    c1t5d0      -      -      0    390      0  6.28M
    c1t6d0      -      -      0    386      0  6.32M
    c1t7d0      -      -      0    381      0  6.27M
----------  -----  -----  -----  -----  -----  -----

iostat -x 1
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk0        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
fd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd0          0.0  372.0    0.0 5691.5  0.1  0.1    0.6   3  12
sd1          0.0  372.0    0.0 5653.1  0.0  0.1    0.2   0   8
sd2          0.0  374.0    0.0 5692.5  0.1  0.1    0.5   3  12
sd3          0.0  353.1    0.0 5641.6  0.1  0.1    0.6   4  12
sd4          0.0  380.0    0.0 5690.4  0.0  0.1    0.3   1   9
sd5          0.0  380.0    0.0 5652.0  0.0  0.1    0.2   0   9
sd6          0.0  377.0    0.0 5695.4  0.0  0.1    0.3   1   9
sd7          0.0  360.0    0.0 5644.5  0.1  0.1    0.5   2  11
sd8          0.0  381.0    0.0 5920.3  0.0  0.1    0.3   1  10
sd9          0.0  374.0    0.0 5879.9  0.1  0.1    0.5   3  12
sd10         0.0  382.0    0.0 5937.7  0.0  0.1    0.3   1  10
sd11         0.0  356.1    0.0 5888.8  0.1  0.1    0.5   3  12
sd12         0.0  377.0    0.0 5938.2  0.0  0.1    0.3   0   9
sd13         0.0  370.0    0.0 5892.8  0.1  0.1    0.6   4  12
sd14         0.0  376.0    0.0 5932.2  0.0  0.1    0.3   1  10
sd15         0.0  360.0    0.0 5884.3  0.1  0.1    0.5   3  11
sd16         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
st4          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0


Two simultaneous runs:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen         8G           31575  19 15822  19           87277  41 128.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 14616  99 +++++ +++ 21683  99  4141  99 +++++ +++  5630  98
nitrogen,8G,,,31575,19,15822,19,,,87277,41,128.8,1,16,14616,99,+++++,+++,21683,99,4141,99,+++++,+++,5630,98

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen         8G           29943  18 16116  20           90079  41 125.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3257  96 +++++ +++  2542  80 17372  99 +++++ +++ 24678  99
nitrogen,8G,,,29943,18,16116,20,,,90079,41,125.3,1,16,3257,96,+++++,+++,2542,80,17372,99,+++++,+++,24678,99

zpool iostat -v 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
internal    9.93G  3.63T      0    810      0  75.5M
  raidz1    4.97G  1.81T      0    412      0  37.8M
    c0t0d0      -      -      0    372      0  5.44M
    c0t1d0      -      -      0    370      0  5.40M
    c0t2d0      -      -      0    372      0  5.44M
    c0t3d0      -      -      0    315      0  5.39M
    c0t4d0      -      -      0    370      0  5.44M
    c0t5d0      -      -      0    370      0  5.40M
    c0t6d0      -      -      0    372      0  5.44M
    c0t7d0      -      -      0    349      0  5.39M
  raidz1    4.96G  1.81T      0    397      0  37.8M
    c1t0d0      -      -      0    366      0  5.44M
    c1t1d0      -      -      0    295      0  5.40M
    c1t2d0      -      -      0    369      0  5.44M
    c1t3d0      -      -      0    290      0  5.39M
    c1t4d0      -      -      0    360      0  5.43M
    c1t5d0      -      -      0    300      0  5.39M
    c1t6d0      -      -      0    360      0  5.43M
    c1t7d0      -      -      0    308      0  5.38M
----------  -----  -----  -----  -----  -----  -----

iostat -x 1
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk0        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
fd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd0          0.0  384.5    0.0 5635.2  0.0  0.1    0.4   2  11
sd1          0.0  387.5    0.0 5599.1  0.0  0.3    0.8   1  28
sd2          0.0  384.5    0.0 5634.2  0.1  0.1    0.5   4  13
sd3          0.0  322.4    0.0 5581.1  0.2  0.2    1.2   9  15
sd4          0.0  380.5    0.0 5633.2  0.0  0.1    0.2   0   9
sd5          0.0  380.5    0.0 5591.2  0.0  0.1    0.3   1   9
sd6          0.0  382.5    0.0 5630.7  0.0  0.1    0.3   1   9
sd7          0.0  359.5    0.0 5580.1  0.0  0.1    0.3   0   9
sd8          0.0  363.5    0.0 5606.7  0.0  0.1    0.3   1   9
sd9          0.0  291.4    0.0 5567.6  0.3  0.2    1.8  11  18
sd10         0.0  362.5    0.0 5605.7  0.1  0.1    0.7   4  14
sd11         0.0  285.4    0.0 5555.6  0.2  0.2    1.3   7  16
sd12         0.0  358.5    0.0 5603.7  0.0  0.1    0.2   0   8
sd13         0.0  299.4    0.0 5563.1  0.2  0.2    1.4   8  18
sd14         0.0  357.5    0.0 5602.2  0.0  0.1    0.3   1   9
sd15         0.0  308.4    0.0 5555.6  0.2  0.2    1.2   6  18
sd16         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
st4          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0


> I have a 4x 500GB disk raidz config w/ a 2.6 GHz dual core at home;
> this config sustains approx 120 MB/sec on reads and writes on single
> or multiple streams.  I'm running build 55; the box has a SI controller
> running in PATA compat. mode.
> 
> One of the challenging aspects of performance work on these sorts of
> things is separating out drivers vs cpus vs memory bandwidth vs disk
> behavior vs intrinsic FS behavior.

Some observations:
* This machine only has 32-bit CPUs.  Could that be limiting performance?  (A 
couple of quick checks for this are noted after this list.)
* A single drive will hit ~60MB/s read and write.  Since these are only 7200rpm 
SATA disks, that's probably all they've got to give.
* A 4-disk (one controller) "RAID0" zpool delivers about 250MB/sec reads, 
95MB/sec writes.
* A 4-disk (two controllers) "RAID0" zpool delivers about 250MB/sec reads, 
95MB/sec writes.
* Two 4-disk (two controllers) "RAID0" zpools deliver about 350MB/sec 
aggregate reads, 210MB/sec writes.
* Three 4-disk "RAID0" zpools deliver about 310MB/sec aggregate reads, 
170MB/sec writes.
* Four 4-disk "RAID0" zpools deliver... an incomplete benchmark.  When the 
"rewrite" part of bonnie++ was running in this particular test, all IO to the 
zpools would frequently stop, and the output from 'zpool iostat' and 'iostat' 
would then freeze for lengthening periods of time.  Clearly some resource was 
becoming exhausted (the memory check noted after this list might show whether 
it's kernel memory).  Eventually I just ran out of patience and killed them all.
* One 8-disk "RAID0" zpool (single controller) delivers about 400MB/sec reads, 
120MB/sec writes.
* One 8-disk "RAID0" zpool (four disks from each controller) delivers about 
400MB/sec reads, 140MB/sec writes.
* Two 8-disk "RAID0" zpools deliver about 550MB/sec aggregate on reads, 
220MB/sec writes.
* One 16-disk "RAID0" zpool will return 400MB/sec on reads, 135MB/sec on writes.
* I would have thought the top-end performance would be a bit higher (or maybe 
I'm being ambitious?).  It may be hardware-related - although (as long as I'm 
reading the output of 'lspci -vt' from the Linux install right) each controller 
should be on its own 100MHz, 64-bit PCI-X bus.  This motherboard is several 
years old, however, so it's possible I'm just bumping into the limitations of 
its PCI implementation.  I might try the cards in different slots when I get 
home tonight and see if that makes a difference.  (The Solaris-side commands 
for double-checking the bus layout are also noted after this list.)
* Writes are a lot slower than reads across the board (I expected this, just 
wondering if the results I'm getting are reasonable).
* Writes seem to be quite a bit slower than Linux s/w RAID as well.  Is this 
just an engineering tradeoff ZFS has made to provide benefits in other areas?  
What kind of write performance do people get out of those honkin' big x4500s?
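
Regarding the 32-bit CPU question and the stalls above, a couple of quick 
checks I still need to do (standard Solaris commands; I haven't captured the 
output during a stall yet):

# is the kernel actually running 64-bit or 32-bit?
isainfo -kv

# rough breakdown of where physical memory is going (I believe the ARC is
# counted under "Kernel" on this release) - worth watching while a stall happens
echo ::memstat | mdb -k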
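
And rather than relying on lspci from the Linux install, the bus/slot layout 
can be cross-checked from the Solaris side, at least where the BIOS exposes it:

# SMBIOS slot information (PCI-X vs plain PCI, slot designations), if available
/usr/platform/i86pc/sbin/prtdiag -v

# raw device tree with per-bus properties, as a fallback
prtconf -pv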
 
 