On Sun, Feb 8, 2009 at 11:12, Tim <t...@tcsac.net> wrote:
> I wouldn't think grabbing 8GB memory would be a big deal after dropping that
> much on the controller??
There being no sense in half measures, I ordered 12GB (i.e., three
kits) of this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148115
Unfortunately, one stick out of six failed Memtest, so I'll likely run
my tests with a 10GB log volume rather than wait for the manufacturer
to send a replacement.  Not that it really matters; 1GB would be
plenty for this use.  For real use, I'll switch to a 4GB log and put
8GB in my desktop, but comparing the full capacity against a small
volume will be interesting.  Perhaps I'll even try a 1GB log + 9GB
L2ARC setup.  Anything else that'd be interesting?  Suggestions for
what to benchmark are welcome.
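For reference, the 1GB log + 9GB L2ARC split would use the standard
zpool syntax, something like the below.  The pool name and device
slices are placeholders, not my actual layout; this is just a sketch:

```shell
# Assume the RAM drive has been partitioned into a 1GB slice (s0)
# and a 9GB slice (s1); "tank" and the ctd names are hypothetical.

# Attach the 1GB slice as a separate intent log (slog):
zpool add tank log c6t3d0s0

# Attach the 9GB slice as an L2ARC cache device:
zpool add tank cache c6t3d0s1

# Verify the resulting vdev layout:
zpool status tank
```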

Interestingly, I didn't even think to test the memory out of the box,
until zpool status reported errors.  Then I tested and found the bad
stick.  Checksums win again.

I also ordered and received six 1TB Hitachi drives for a separate pool
so I can work with an empty pool rather than a mostly-full one.

Lastly, here are some results from bonnie++, using only the RAM device
(i.e., zpool create scratch theramdrive):
# time bonnie++ -s 9900 -n 10240 -u will
Using uid:1000, gid:10.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...Expected 10485760 files but only got 10485710
Cleaning up test directory after error.

Huh.  Maybe a bug in bonnie++?  'zpool status' doesn't show anything
wrong.  The readme says "The file creation tests use file names with 7
digits numbers and a random number (from 0 to 12) of random
alpha-numeric characters."  Since this test drew roughly 10M names
from a space of about 62 ** 10 * 10 ** 7 ~= 10 ** 25 possibilities, 50
collisions shouldn't happen by chance, so perhaps the random number
generator is skewed somehow.
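To put a number on that: the birthday-problem approximation gives the
expected collision count as n*(n-1)/(2*N) for n uniform draws from a
space of size N.  A quick sketch (assuming bonnie++ picked names
independently and uniformly, which is exactly what's in doubt):

```python
# Expected filename collisions under a uniform-random model.
# n and N follow the figures above; this says nothing about
# bonnie++'s actual generator, only what uniformity would predict.

n = 10_485_760        # files requested: -n 10240 means 10240 * 1024
N = 62**10 * 10**7    # rough name-space size from the readme estimate

expected_collisions = n * (n - 1) / (2 * N)
print(f"expected collisions: {expected_collisions:.1e}")
```

That comes out around 10 ** -11, i.e., effectively zero, so 50 missing
files really does point at a skewed generator (or a bug) rather than
bad luck.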

In any case, here are two separate runs, one with -s 9900 -n 0 and one
with -s 0 -n 4096:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
will-fs       9900M 58110  60 106084  17 68095  13 76297  93 154258  11  3221   5
Version 1.03c       ------Sequential Create------ --------Random Create--------
will-fs             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               4096  9120  84 10399  25  4600  25  8552  60 35342  71  2602  23


and one interval of "zpool iostat -v scratch 10" output captured
during the "stat files in sequential order" phase:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
scratch2    1.98G  7.96G  2.63K      0   165M      0
  c6t3d0    1.98G  7.96G  2.63K      0   165M      0
----------  -----  -----  -----  -----  -----  -----

None too shabby, methinks.  I'll post some more detailed results when
I've done more testing.

Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
