Luke Lonergan wrote:
>> Actually, it does seem to work quite
>> well when you use a read optimized
>> SSD for the L2ARC.  In that case,
>> "random" read workloads have very
>> fast access, once the cache is warm.
>>
>
> One would expect so, yes. But the usefulness of this is limited to the cases
> where the entire working set will fit into an SSD cache.
>
Not entirely out of the question. SSDs can be purchased today with more
than 500 GBytes in a 2.5" form factor. One or more of these would make a
dandy L2ARC.
http://www.stecinc.com/product/mach8mlc.php

> In other words, for random access across a working set larger (by say X%)
> than the SSD-backed L2ARC, the cache is useless. This should asymptotically
> approach truth as X grows, and experience shows that X=200% is where it's
> about 99% true.
>
> As time passes and SSDs get larger while many OLTP random workloads remain
> somewhat constrained in size, this becomes less important.
>

You can also purchase machines with 2+ TBytes of RAM, which will do nicely
for caching most OLTP databases :-)

> Modern DB workloads are becoming hybridized, though. A 'mixed workload'
> scenario is now common, where a mix of updated working sets and indexed
> access runs alongside heavy analytical 'update rarely, if ever' kinds of
> workloads.
>

Agreed. We think the hybrid storage pool architecture will work well for a
variety of these workloads, but the proof will be in the pudding. No doubt
we'll discover some interesting interactions along the way. Stay tuned...
 -- richard
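For anyone who wants to experiment with this, attaching an SSD as an L2ARC
is a one-line operation. A minimal sketch follows; the pool and device
names ('tank', c1t0d0, etc.) are only placeholders, so substitute your own:

    # add a read-optimized SSD as a cache (L2ARC) device to pool 'tank'
    zpool add tank cache c2t0d0

A hybrid pool can also be built up front with both a separate log (slog)
device and a cache device:

    # mirrored data disks, an SSD slog, and an SSD L2ARC
    zpool create tank mirror c1t0d0 c1t1d0 log c3t0d0 cache c2t0d0

As noted above, the L2ARC is fed from blocks about to be evicted from the
ARC, so the random-read benefit only shows up once the cache has had time
to warm.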