On Thu, 22 Oct 2009, Marc Bevand wrote:
> Bob Friesenhahn <bfriesen <at> simple.dallas.tx.us> writes:
> For random write I/O, caching improves I/O latency, not sustained I/O
> throughput (which is what random write IOPS usually refer to). So Intel
> can't cheat with caching. However, they can cheat by benchmarking a
> brand-new drive instead of an aged one.
With FLASH devices, a sufficiently large write cache can improve
sustained random write throughput as well. One can imagine the "wear
leveling" logic being used to do tricky remapping so that several
"random writes" are coalesced into sequential writes to the same FLASH
superblock: only one superblock needs to be updated, and the parts of
the old superblocks that would have been overwritten are simply marked
as unused. This of course requires rather advanced remapping logic, at
a finer granularity than the superblock. When erased space becomes
tight (or on a periodic basis), the data in several sparsely-used
superblocks is migrated to a fresh superblock in more compact form
(along with the requisite logical block remapping) to reclaim space. It
is worth developing such remapping logic because FLASH erasures and
re-writes are so expensive.
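
To make that concrete, here is a minimal sketch in Python of the kind
of remapping I mean. Everything in it is invented for illustration
(names, toy sizes); real drive firmware also has to deal with wear
counters, power-loss safety, and capacity accounting, none of which
appear here. The only point is that scattered logical writes land as
sequential page writes within one superblock, and that compaction later
consolidates sparsely-used superblocks:

PAGES_PER_SB = 8   # pages per erasable superblock (toy size)
NUM_SB = 8         # superblocks on the device (toy size)

class SketchFTL:
    """Toy log-structured remapper: every logical write goes to the
    next sequential page of the open superblock; overwritten data is
    merely marked dead and reclaimed later by compaction."""

    def __init__(self):
        self.blocks = [[] for _ in range(NUM_SB)]  # pages: (lba, data)
        self.free = list(range(NUM_SB))            # erased superblocks
        self.map = {}                              # lba -> (superblock, page)
        self.open_sb = self.free.pop()

    def write(self, lba, data):
        # A "random" logical write becomes the next sequential page.
        if len(self.blocks[self.open_sb]) == PAGES_PER_SB:
            self.open_sb = self._allocate()
        sb = self.open_sb
        self.blocks[sb].append((lba, data))
        self.map[lba] = (sb, len(self.blocks[sb]) - 1)  # old copy now dead

    def read(self, lba):
        sb, page = self.map[lba]
        return self.blocks[sb][page][1]

    def _live(self, sb):
        # Pages in sb that the logical map still points at.
        return [(lba, data) for i, (lba, data) in enumerate(self.blocks[sb])
                if self.map.get(lba) == (sb, i)]

    def _allocate(self):
        if len(self.free) <= 1:   # keep one erased block in reserve for GC
            self._compact()
        return self.free.pop()

    def _compact(self):
        # Victim: the closed superblock with the fewest live pages.
        closed = [sb for sb in range(NUM_SB)
                  if sb != self.open_sb and sb not in self.free]
        victim = min(closed, key=lambda sb: len(self._live(sb)))
        dest = self.free.pop()    # the reserved erased block
        for page, (lba, data) in enumerate(self._live(victim)):
            self.blocks[dest].append((lba, data))   # rewrite compactly
            self.map[lba] = (dest, page)
        self.blocks[victim] = []                    # the one expensive erase
        self.free.append(victim)

if __name__ == "__main__":
    import random
    ftl = SketchFTL()
    for i in range(200):                  # 200 scattered logical writes
        lba = random.randrange(32)
        ftl.write(lba, "rev-%d-of-%d" % (i, lba))
    assert all(ftl.read(lba).endswith("-of-%d" % lba) for lba in ftl.map)

In a scheme like this the flash itself only ever sees sequential page
writes plus whole-superblock erases, so the apparent random-write IOPS
are limited by the remapping logic and by how often compaction has to
run, not by in-place updates.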
>> They also carefully only use a limited span of the device, which
>> fits most perfectly with how the device is built.
> AFAIK, for the X25-E series, they benchmark random write IOPS on a 100%
> span. You may be confusing it with the X25-M series, for which they
> actually clearly disclose two performance numbers: 3.3k random write
> IOPS on an 8GB span, and 350 on a 100% span. See
> http://www.intel.com/cd/channel/reseller/asmo-na/eng/products/nand/tech/425265.htm
You are correct; I was extrapolating from the benchmark scenarios
described in the X25-M series documentation. It seems reasonable for
the same manufacturer to use the same benchmark methodology across
similar products. Then again, they are still new at this.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss