Wes Felter wrote:
> Eric D. Mudama wrote:
>> On Mon, Jan 4 at 16:43, Wes Felter wrote:
>>> Eric D. Mudama wrote:
>>>> I am not convinced that a general purpose CPU, running other software
>>>> in parallel, will be able to be timely and responsive enough to
>>>> maximize bandwidth in an SSD controller without specialized hardware
>>>> support.
>>> Fusion-io would seem to be a counter-example, since it uses a fairly
>>> simple controller (I guess the controller still performs ECC and
>>> maybe XOR) and the driver eats a whole x86 core. The result is very
>>> high performance.
>> I see what you're saying, but it isn't obvious (to me) how well
>> they're using all the hardware at hand. 2GB/s of bandwidth over their
>> PCI-e link and what looks like a TON of NAND, with a nearly-dedicated
>> x86 core... resulting in 600MB/s or something like that?
> Actually it's 600-700MB/s out of a 1+1GB/s slot or 1.5GB/s with two
> cards in a 2+2GB/s slot. I suspect that's pretty close to the PCIe
> limit. IIRC they have 22 NAND channels at 40MB/s (theoretical peak)
> each, which is 880MB/s. I agree that their CPU efficiency is not
> great, but cores are supposed to be cheap these days.
>
> Wes Felter
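A quick back-of-the-envelope check on those quoted figures (a rough Python sketch; the 22-channel and 40MB/s-per-channel numbers are taken from the message above, not verified here):

    # aggregate NAND bandwidth implied by the quoted figures
    nand_channels = 22
    mb_per_channel = 40                    # MB/s, theoretical peak per channel
    aggregate = nand_channels * mb_per_channel
    print(aggregate)                       # 880 MB/s

    # fraction of that peak actually delivered, taking ~650 MB/s as the
    # midpoint of the quoted 600-700MB/s range
    print(round(650 / aggregate, 2))       # ~0.74

Roughly three quarters of the theoretical NAND peak, in other words.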
The single Fusion-IO card is a 4x PCI-E 1.1 interface, which means about
1GB/s max throughput. The Fusion-IO Duo is an 8x PCI-E 2.0 interface,
which tops out at about 4GB/s. So it looks like the single card is
delivering at least a major fraction of its interface's maximum
throughput, while the Duo card still has plenty of headroom.
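For reference, the lane math behind those numbers (a minimal sketch; per-direction bandwidth before protocol overhead, using the 8b/10b encoding that both PCI-E 1.x and 2.0 use):

    # per-direction PCIe bandwidth: GT/s per lane / 10 (8b/10b) = GB/s per lane
    def pcie_mb_per_s(gt_per_s, lanes):
        return gt_per_s / 10 * lanes * 1000   # MB/s

    print(pcie_mb_per_s(2.5, 4))   # PCI-E 1.1 x4 -> 1000 MB/s (~1GB/s)
    print(pcie_mb_per_s(5.0, 8))   # PCI-E 2.0 x8 -> 4000 MB/s (~4GB/s)

Actual payload throughput lands somewhat lower once TLP headers and flow control are accounted for, but the order of magnitude holds.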
I see the single Fusion-IO card eat about 1/4 of the CPU that an 8Gbit
Fibre Channel HBA does, and roughly the same as a 10Gbit Ethernet card.
So it's not out of line with comparable-throughput add-in cards. It does
need significantly more CPU than a SAS or SCSI controller, though.
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)