Ross,
Please refresh your test script from the source. The current script
tells cpio to use 128k blocks and mentions the proper command in its
progress message. I have now updated it to display useful information
about the system being tested, and to dump the pool configuration.
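For anyone who wants to see the shape of the test without fetching the script, the read pass boils down to streaming a file tree through cpio with a 128k block size and throwing the archive away. A minimal sketch (the directory, file sizes, and counts here are placeholders, not the actual script):

```shell
#!/bin/sh
# Sketch of a cpio-based read pass (not the actual test script).
# Builds a small throwaway tree, then streams it through cpio using
# 128k (131072-byte) blocks, discarding the archive so only reads matter.
set -e

dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

# A few small test files; the real test uses a much larger tree.
for i in 1 2 3; do
  dd if=/dev/zero of="$dir/file$i" bs=1024 count=256 2>/dev/null
done

cd "$dir"
if command -v cpio >/dev/null 2>&1; then
  # -o creates an archive; -C 131072 sets the I/O block size to 128k.
  find . -type f | cpio -o -C 131072 > /dev/null
  echo "cpio pass completed"
else
  echo "cpio not found; skipping"
fi
```

Timing that pipeline over a tree larger than RAM is what produces the posted numbers.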
It is really interesting to see the various posted numbers. This is
as close as we come to a common benchmark, a sort of sanity check.
What is most interesting to me is the reported performance from those
who paid for really fast storage hardware and are running what should
be really fast storage configurations. It is interesting because there
seems to be a hardware-independent cap on maximum read performance. It
seems that ZFS's read algorithm is rate-limiting reads so that no
matter how capable the hardware is, there is a peak read limit.
There can be no other explanation as to why an ideal configuration of
"Thumper II" SAS type hardware is neck and neck with my own setup, and
quite similar to another fast system as well. My own setup is
delivering less than half the performance I would expect for the
initial read (iozone says it can read 540 MB/second from a huge file).
Do the math and see if you think that ZFS is giving you the read
performance you expect based on your hardware.
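The math is just bytes moved divided by elapsed time, compared against what the hardware should sustain. A trivial sketch with made-up example numbers (16 GiB read in 60 seconds; the 540 MB/s is what iozone reported on my setup):

```shell
#!/bin/sh
# Hypothetical example: 16 GiB read in 60 seconds, integer MB/s.
bytes_read=$((16 * 1024 * 1024 * 1024))
seconds=60
expected=540   # MB/s, raw sequential read rate reported by iozone

observed=$((bytes_read / seconds / 1024 / 1024))
percent=$((observed * 100 / expected))
echo "observed: ${observed} MB/s (${percent}% of expected)"
```

With these example numbers the observed rate works out to roughly half of what the hardware can do, which matches what I am seeing.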
I think that we are encountering several bugs here. We also have a
general read bottleneck.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss