I repeated the same tests on my dual-core amd64 system with 4GB of RAM, running 8-STABLE, with five Samsung HD103SI 1TB drives in a raidz1 pool on a Supermicro PCI-E controller.
foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 4.246402 secs (49386563 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 3.913826 secs (53583169 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 4.436917 secs (47265975 bytes/sec)

foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.377800 secs (555095486 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.140478 secs (1492869742 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.140452 secs (1493143431 bytes/sec)

foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.117563 secs (258347487 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.251862 secs (254142882 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.307188 secs (252450287 bytes/sec)

foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.958791 secs (110616336 bytes/sec)
foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.924833 secs (110814822 bytes/sec)
foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.893001 secs (111001529 bytes/sec)

foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.156406 secs (406708089 bytes/sec)
foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.126920 secs (409047148 bytes/sec)
foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.145461 secs (407573211 bytes/sec)

Here are my relevant settings:

vfs.zfs.prefetch_disable=0
vfs.zfs.zil_disable="1"

Other than that, I'm trusting FreeBSD's default settings, and they seem to be working pretty well.
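Both of those are boot-time tunables set in /boot/loader.conf. If you want to check what a running system actually picked up, sysctl will report them (a quick sanity check; vfs.zfs.zil_disable only exists on ZFS code of this vintage, and disabling the ZIL trades fsync() durability for speed, so only do it with data you can afford to lose):

foghornleghorn# sysctl vfs.zfs.prefetch_disable vfs.zfs.zil_disable
vfs.zfs.prefetch_disable: 0
vfs.zfs.zil_disable: 1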
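For a run that gets past the ARC caching Michael mentions below, the test file needs to be bigger than RAM; on this 4GB box that would mean something like the following (a sketch only, not results I'm reporting; bonnie++ comes from the benchmarks/bonnie++ port, and the -s/-r sizes are my assumptions for a 4GB machine):

foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=8000
foghornleghorn# bonnie++ -d /tank -s 8192 -r 4096 -u root

bonnie++ insists that the -s file size be at least double the -r RAM size for exactly this reason, so that reads can't be satisfied out of cache.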
On Sun, Feb 14, 2010 at 3:34 PM, Michael Loftis <mlof...@wgops.com> wrote:

> --On Sunday, February 14, 2010 5:28 PM +0000 Jonathan Belson
> <j...@witchspace.com> wrote:
>
>> Hiya
>>
>> After reading some earlier threads about ZFS performance, I decided to
>> test my own server. I found the results rather surprising...
>
> You really need to test with at least 4GB of data, else you're just
> testing caching speeds on writing. Use a test suite like bonnie++ and
> you'll see just how poor the ZFS performance is, especially with
> multiple readers on the same file, at least in 8.0.

--
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net