Sounds like many of us are in a similar situation.

To clarify my original post: the goal here was to continue with what has been a cost-effective solution to some of our storage requirements. I'm looking for hardware that won't get me the runaround from the Oracle support folks, finger-pointing between vendors, or lots of grief from an untested combination of parts. If this isn't possible we'll certainly find another solution. I already know it won't be the 7000 series.

Thank you,
Chris Banal


Marion Hakanson wrote:
jp...@cam.ac.uk said:
I can't speak for this particular situation or solution, but I think in
principle you are wrong.  Networks are fast.  Hard drives are slow.  Put a
10G connection between your storage and your front ends and you'll have
the bandwidth[1].  Actually if you really were hitting 1000x8Mbits I'd put
2, but that is just a question of scale.  In a different situation I have
boxes which peak at around 7 Gb/s down a 10G link (in reality I don't need
that much because it is all about the IOPS for me).  That is with just
twelve 15k disks.  Your situation appears to be pretty ideal for storage
hardware, so perfectly achievable from an appliance.

Depending on usage, I disagree with your bandwidth and latency figures
above.  An X4540, or an X4170 with J4000 JBODs, has more bandwidth
to its disks than 10Gbit Ethernet.  You would need three 10GbE interfaces
between your CPU and the storage appliance to equal the bandwidth of a
single 8-port 3Gb/s SAS HBA (five of them for 6Gb/s SAS).
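
For what it's worth, here's the back-of-the-envelope arithmetic behind
that comparison, as a rough sketch only (raw line rates, ignoring SAS and
Ethernet protocol overhead, which will shave some off both sides):

    # Rough sketch: compare aggregate SAS HBA bandwidth to 10GbE ports,
    # using raw line rates only (no protocol/framing overhead counted).
    GBIT_PER_10GBE = 10   # one 10GbE port, Gbit/s
    SAS_PORTS = 8         # ports on the HBA being compared

    for sas_gbit_per_port in (3, 6):
        hba_total = SAS_PORTS * sas_gbit_per_port       # total SAS bandwidth, Gbit/s
        ports_needed = -(-hba_total // GBIT_PER_10GBE)  # ceiling division
        print("8x %dGb/s SAS = %d Gbit/s ~= %d x 10GbE ports"
              % (sas_gbit_per_port, hba_total, ports_needed))

That works out to 24 Gbit/s (three 10GbE ports) for 3Gb/s SAS and
48 Gbit/s (five ports) for 6Gb/s SAS.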

It's also the case that the Unified Storage platform doesn't have enough
bandwidth to drive more than four 10GbE ports at their full speed:
http://dtrace.org/blogs/brendan/2009/09/22/7410-hardware-update-and-analyzing-the-hypertransport/

We have a customer (internal to the university here) that does
high-throughput gene sequencing.  They like a server that can hold large
amounts of data, do a first-pass analysis on it, and then serve it up
over the network to a compute cluster for further computation.  Oracle
has nothing in their product line (anymore) to meet that need.  They
ended up ordering an 8U chassis w/40x 2TB drives in it, and are willing
to pay the $2k/yr retail ransom to Oracle to run Solaris (ZFS) on it,
at least for the first year.  Maybe OpenIndiana next year, we'll see.

Bye Oracle....

Regards,

Marion


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss