On Jan 1, 2010, at 6:33 PM, Bob Friesenhahn wrote:

On Fri, 1 Jan 2010, Erik Trimble wrote:

Maybe it's approaching time for vendors to just produce really stupid SSDs: ones that do wear-leveling, expose their true page-size info (e.g. for MLC, how many blocks of what size have to be written at once), and that's about it. Let filesystem makers worry about scheduling writes appropriately, providing redundancy, etc.
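
For illustration, a minimal sketch of what such a "stupid SSD" interface might look like; the geometry fields and function names below are invented for this example and are not taken from any real device or driver:

/* Hypothetical sketch only: the geometry and operations a wear-leveling-only
 * SSD might expose so the filesystem can align and schedule writes itself.
 * All names here are made up for illustration. */
#include <stddef.h>
#include <stdint.h>

struct flash_geometry {
    uint32_t page_size;        /* smallest programmable unit, e.g. 4 KiB    */
    uint32_t pages_per_block;  /* pages sharing a single erase block        */
    uint32_t min_write_pages;  /* pages that must be written together (MLC) */
};

struct flash_dev;              /* opaque per-device handle */

struct flash_ops {
    int (*get_geometry)(struct flash_dev *d, struct flash_geometry *geo);
    int (*read_pages) (struct flash_dev *d, uint64_t page, void *buf,
                       size_t npages);
    int (*write_pages)(struct flash_dev *d, uint64_t page, const void *buf,
                       size_t npages);   /* caller honors min_write_pages */
    int (*erase_block)(struct flash_dev *d, uint64_t block);
};

With something along these lines, wear-leveling stays in the device firmware while write scheduling, redundancy, and block allocation move up into the filesystem.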

From the benchmarks, it is clear that the drive interface is already often the bottleneck for these new SSDs. That implies the current development path is in the wrong direction unless we are willing to accept legacy-sized devices implementing a complex legacy protocol. If the devices keep the same physical size while adding more storage, we end up in the same situation we already have with rotating media: huge media density and relatively slow I/O performance. We need stupider SSDs which fit in a small form factor, offer considerable bandwidth (e.g. 300 MB/second) per device, and use a specialized communication protocol not defined by legacy disk drives. That allows more I/O to occur in parallel, for much better aggregate I/O rates.
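
To make the bottleneck concrete, a rough back-of-the-envelope sketch: a single legacy 3 Gbps SATA/SAS link carries roughly 300 MB/s of payload after encoding overhead, so a pile of flash behind one such link is capped at about one device's worth of bandwidth, while a link per device scales. The figures below are illustrative, not measurements:

/* Illustrative arithmetic only: aggregate flash bandwidth vs. what a single
 * shared legacy link can carry, and what per-device links would carry. */
#include <stdio.h>

int main(void)
{
    const double link_mb_s = 300.0;   /* ~3 Gbps lane after 8b/10b overhead */
    const double dev_mb_s  = 300.0;   /* assumed per-device flash bandwidth */

    for (int ndev = 1; ndev <= 16; ndev *= 2) {
        double flash_total   = ndev * dev_mb_s;   /* what the flash could deliver */
        double shared_link   = link_mb_s;         /* one legacy link for them all */
        double per_dev_links = ndev * link_mb_s;  /* one link per device          */
        printf("%2d devices: flash %5.0f MB/s, shared link %5.0f MB/s, "
               "per-device links %5.0f MB/s\n",
               ndev, flash_total, shared_link, per_dev_links);
    }
    return 0;
}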

You can already see this affecting the design of high-throughput
storage.  The Sun Storage F5100 Flash Array has 80 SSDs and
uses 64 SAS channels for host connection. Some folks think that
6 Gbps SATA/SAS connections are the Next Great Thing^TM but
that only means you need 32 host connections.  It is quite amazing
to have 1M IOPS and 12.8 GB/s in 1 RU.  Perhaps this is the DAS
of the future?
 -- richard
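
A rough check of the port arithmetic, assuming roughly 300 MB/s of usable payload per 3 Gbps lane and 600 MB/s per 6 Gbps lane (the array's 64 channels presumably include headroom beyond the bare minimum); purely illustrative:

/* Illustrative only: minimum lane counts needed to move 12.8 GB/s at the
 * two SAS signaling rates.  Doubling the lane rate halves the port count. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double array_mb_s  = 12800.0;             /* 12.8 GB/s aggregate      */
    const double lane_mb_s[] = { 300.0, 600.0 };    /* 3 Gbps / 6 Gbps payload  */
    const char  *label[]     = { "3 Gbps", "6 Gbps" };

    for (int i = 0; i < 2; i++)
        printf("%s lanes to move %.1f GB/s: at least %.0f\n",
               label[i], array_mb_s / 1000.0,
               ceil(array_mb_s / lane_mb_s[i]));
    return 0;
}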

