On 8/16/2010 3:57 PM, Russ Price wrote:
On 08/16/2010 10:35 AM, Freddie Cash wrote:
On Mon, Aug 16, 2010 at 7:13 AM, Mike DeMarco<mikej...@yahoo.com> wrote:
What I would really like to know is why PCI-e RAID controller cards cost more than an entire motherboard with a processor. Some cards can cost over $1,000; for what?

Because they include a motherboard and processor.  :)  The high-end
RAID controllers include their own CPUs and RAM for doing all the RAID
stuff in hardware.

The low-end RAID controllers (if you can even really call them RAID
controllers) do all the RAID stuff in software via a driver installed
in the OS, running on the host computer's CPU.

And the ones in the middle have "simple" XOR engines for doing the
RAID stuff in hardware.
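
To make that concrete, here is a minimal sketch (plain Python, not any vendor's driver code) of what that "RAID stuff" boils down to for RAID-5: the parity block is just the byte-wise XOR of the data blocks in a stripe, and rebuilding a lost block is the same XOR run over the survivors.

# Minimal illustration only -- real drivers work on large aligned buffers
# with vectorized XOR, but the arithmetic is exactly this.

def raid5_parity(data_blocks):
    """Parity block for a stripe: byte-wise XOR of all data blocks."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks):
    """Reconstruct a lost block: XOR the parity with the surviving data blocks."""
    return raid5_parity(surviving_blocks)  # same operation, thanks to XOR symmetry

if __name__ == "__main__":
    stripe = [b"AAAA", b"BBBB", b"CCCC"]
    parity = raid5_parity(stripe)
    # Pretend the disk holding the second block died:
    assert rebuild_missing([stripe[0], stripe[2], parity]) == stripe[1]

A low-end card leaves that loop to the host CPU, a mid-range card offloads it to its XOR engine, and a high-end card runs it on its own processor and RAM.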



And the irony is that the expensive hardware RAID controllers really aren't a good idea for ZFS. For a ZFS application, you're far better off using a simple HBA in JBOD mode, and such HBAs can be had in the $100-$200 range.
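
As a rough sketch of what that looks like in practice (the pool name, raidz level, and c0t*d0 device names below are placeholders, not from this thread), the HBA only has to present the bare disks and ZFS does the rest:

# Untested sketch: build one raidz2 vdev straight on the bare disks the HBA
# exposes. ZFS supplies the parity, checksumming, and resilvering itself, so
# any RAID logic on the controller would only get in the way.
import subprocess

disks = ["c0t0d0", "c0t1d0", "c0t2d0", "c0t3d0", "c0t4d0", "c0t5d0"]  # placeholders
subprocess.run(["zpool", "create", "tank", "raidz2"] + disks, check=True)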



Yep, though, honestly, the best thing for ZFS would be some sort of enclosure that has a redundant "controller" connection that does *no* RAID or other device manipulation at all, but DOES have a large NVRAM cache. I get this currently by running all my array enclosures either in JBOD mode or, more likely, as single-disk RAID0 volumes. But I'm overpaying for all that nice RAID controller hardware I'm not using, so it would be nice to see someone make such an enclosure. Call it a "caching JBOD".

:-)


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

