Adam Lindsay wrote:
> Bart Smaalders wrote:
>> Adam Lindsay wrote:
>>> Okay, the way you say it, it sounds like a good thing. I
>>> misunderstood the performance ramifications of COW and ZFS's
>>> opportunistic write locations, and came up with a much more
>>> pessimistic guess that it would approach random writes. As it is,
>>> I have upper (number of data spindles) and lower (number of disk
>>> sets) bounds to deal with. I suppose the available caching memory
>>> is what controls the resilience to the demands of random reads?
>> W/ that many drives (16), if you hit in RAM the reads are not really
>> random :-), or they span only a tiny fraction of the available disk
>> space.
> Clearly I hadn't thought that comment through. :) I think my mental
> model included imagined bottlenecks elsewhere in the system, but I
> haven't gotten to discussing those yet.
>> Hmmm... that _was_ prob. more opaque than necessary. What I meant was
>> that you've got something on the order of 5TB or better of disk space;
>> assuming uniformly distributed reads of data and 4 GB of RAM, the odds
>> of hitting in the cache are essentially zero wrt performance.
>> Are you reading and writing the same file at the same time? Your cache
>> hit rate will be much better then....
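[The "essentially zero" claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming the figures from the thread — roughly 5 TB of data, 4 GB of RAM usable as cache — and uniformly distributed reads:]

```python
# Back-of-the-envelope cache hit rate for uniformly random reads.
# Figures are the ones from the thread (assumptions, not measurements):
cache_bytes = 4 * 2**30    # ~4 GB of RAM available for caching
data_bytes = 5 * 2**40     # ~5 TB of data, read uniformly at random

# With uniform reads, the chance that any given read lands on cached
# data is simply the fraction of the data set that fits in RAM.
hit_rate = cache_bytes / data_bytes
print(f"expected cache hit rate: {hit_rate:.4%}")  # about 0.08%
```

So under these assumptions fewer than 1 in 1000 reads hits the cache, and read performance is effectively bounded by what the spindles can deliver.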
> Not in the general case. Hmm, but there are some scenarios with
> multimedia caching boxes, so that could be interesting to leverage
> eventually.
> thanks,
> adam

You're welcome.

- Bart
--
Bart Smaalders Solaris Kernel Performance
[EMAIL PROTECTED] http://blogs.sun.com/barts
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss