On Oct 20, 2009, at 8:23 AM, Robert Dupuy wrote:
A word of caution: don't read too much into the fact that the
F20 is included in the Exadata Machine.
From what I've heard, the flash_cache feature of Oracle 11.2.0 that
was enabled in the beta is not working in the production release for
anyone except the Exadata 2.
The question is: why did they need to give this machine an unfair
software advantage? Is it because of the poor performance they
found with the F20?
Oracle bought Sun; they have reason to make such moves.
I have been talking to a Sun rep for weeks now, trying to get the
latency specs on this F20 card, with no luck so far in getting them
revealed.
AFAICT, there is no consistent latency measurement in the
industry, yet. With magnetic disks, you can usually get some
sort of average values, which can be useful to the first order.
We do know that for most flash devices read latency is relatively
easy to measure, but write latency can vary by an order of magnitude,
depending on the SSD design and IOP size. Ok, this is a fancy way
of saying YMMV, but in real life, YMMV.
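To make that concrete, here is a rough sketch of how one could time
O_DSYNC writes at a few IOP sizes and watch the average and worst-case
latency move. It is not a benchmark -- the scratch path, the I/O sizes,
and the iteration count are all placeholders -- and it uses gethrtime(),
so it assumes (Open)Solaris:

/*
 * Hypothetical sketch, not a benchmark: time synchronous writes at a
 * few I/O sizes to see how a device's write latency moves with IOP size.
 * The path is a placeholder; point it at a scratch file or device you
 * can safely overwrite.
 */
#include <sys/time.h>   /* gethrtime() on (Open)Solaris */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NWRITES 1000

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/scratch/latency.test";
    size_t sizes[] = { 4096, 65536, 1048576 };
    size_t s;
    int fd, i;

    fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0600);
    if (fd < 0) { perror("open"); return (1); }

    for (s = 0; s < sizeof (sizes) / sizeof (sizes[0]); s++) {
        char *buf = malloc(sizes[s]);
        double total = 0, worst = 0, ms;
        hrtime_t t0;

        memset(buf, 0xA5, sizes[s]);
        for (i = 0; i < NWRITES; i++) {
            t0 = gethrtime();
            /* spread writes over a small region rather than one block */
            if (pwrite(fd, buf, sizes[s], (off_t)(i % 64) * sizes[s]) < 0) {
                perror("pwrite");
                return (1);
            }
            ms = (gethrtime() - t0) / 1e6;
            total += ms;
            if (ms > worst)
                worst = ms;
        }
        printf("%8lu bytes: avg %.3f ms  worst %.3f ms\n",
            (unsigned long)sizes[s], total / NWRITES, worst);
        free(buf);
    }
    (void) close(fd);
    return (0);
}

The writes are forced to stable storage with O_DSYNC so the device can't
hide behind the OS page cache; that synchronous write latency is the
number a ZIL/slog workload cares about.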
However, you can look at Sun's other products, like the F5100, which
are very unimpressive and have high latency.
I would not assume this Sun tech is in the same league as a Fusion-io
ioDrive or a Ramsan-10. They would not confirm whether it's a
native PCIe solution, or whether the reason it comes on a SAS card is
that it requires SAS.
So: test, test, test, and don't assume this card is competitive just
because it came out this year. I am not sure it's even competitive
with last year's ioDrive.
+1
I told my Sun reseller that I merely needed it to be faster than the
Intel X25-E in terms of latency, and they weren't able to
demonstrate that, at least so far... lots of foot-dragging, and I can
only assume they want to sell as much as they can before the card's
metrics become widely known.
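FWIW, if the reseller won't produce numbers, that particular comparison
is easy enough to run yourself. Here's a minimal sketch (device paths,
the 1 GB span, the 4 KB size, and the read count are all placeholders,
and gethrtime()/memalign() assume (Open)Solaris) that times
single-threaded 4 KB random reads on two raw devices, so the F20 and an
X25-E can be put side by side:

/*
 * Hypothetical sketch: compare small random-read latency of two devices
 * head to head.  Reads only, so it is non-destructive; run it against
 * the raw (rdsk) device of each candidate and compare the numbers.
 */
#include <sys/time.h>   /* gethrtime() on (Open)Solaris */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BS      4096                    /* 4 KB reads */
#define SPAN    (1024LL * 1024 * 1024)  /* probe the first 1 GB */
#define NREADS  2000

static double probe(const char *path)
{
    int fd = open(path, O_RDONLY);
    char *buf = memalign(BS, BS);   /* raw devices want aligned buffers */
    double total = 0;
    hrtime_t t0;
    off_t off;
    int i;

    if (fd < 0 || buf == NULL) { perror(path); exit(1); }

    for (i = 0; i < NREADS; i++) {
        off = ((off_t)lrand48() % (SPAN / BS)) * BS;
        t0 = gethrtime();
        if (pread(fd, buf, BS, off) != BS) { perror("pread"); exit(1); }
        total += (gethrtime() - t0) / 1e6;
    }
    free(buf);
    (void) close(fd);
    return (total / NREADS);
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        (void) fprintf(stderr, "usage: %s /dev/rdsk/devA /dev/rdsk/devB\n",
            argv[0]);
        return (1);
    }
    printf("%s: avg 4K random read %.3f ms\n", argv[1], probe(argv[1]));
    printf("%s: avg 4K random read %.3f ms\n", argv[2], probe(argv[2]));
    return (0);
}

Single-threaded at queue depth 1, so it measures latency rather than
throughput; pointing it at the raw devices keeps the page cache out of
the way.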
I'd be surprised if anyone could answer such a question while
simultaneously being credible. How many angels can dance on
the tip of a pin? Square dance or ballet? :-) FWIW, Brendan recently
blogged about measuring this at the NFS layer.
http://blogs.sun.com/brendan/entry/hybrid_storage_pool_top_speeds
I think where we stand today, the higher-level systems questions of
redundancy tend to work against built-in cards like the F20. These
sorts of cards have been available in one form or another for more
than 20 years, and yet they still have limited market share -- not
for lack of speed, but because the other limitations carry more
weight. If the stars align and redundancy above the block layer gets
more popular, then we might see this sort of functionality implemented
directly on the motherboard... at which point we can revisit the notion
of the file system. Previous efforts to do this (e.g., Virident) haven't
demonstrated stellar market movement.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss