Hi,

Note that these are page cache rates; if the application pushes harder and 
exposes the rates of the underlying devices, there is another world of 
performance to be observed. This is where ZFS becomes a challenge, because the 
relationship between application-level I/O and pool-level I/O is very hard to 
predict. For example, copy-on-write (COW) may or may not have to read old data 
for a small update, and a large portion of the pool vdev's capability can be 
spent on that kind of overhead.

Likewise, on reads, if the access pattern is random you may or may not get any 
benefit from the 32 KB to 128 KB reads issued to each disk of the pool vdev on 
behalf of a small application read, say 8 KB; again, lots of overhead 
potential. I am not complaining; ZFS is great and I'm a fan, but you definitely 
have your work cut out for you if you want to predict its ability to scale for 
any given workload.
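
To make the read-amplification point concrete, here is a rough 
back-of-the-envelope sketch. The record size and read size below are 
hypothetical illustrations, not measurements from any particular pool:

    # Rough sketch of read amplification for small random reads on ZFS,
    # assuming the whole record has to be fetched to satisfy the request.
    # The 8 KB read size and 128 KB recordsize are illustrative only.

    def read_amplification(app_read_kb: float, recordsize_kb: float) -> float:
        """Ratio of data moved at the pool level to data the application
        actually asked for, assuming a full-record read."""
        return recordsize_kb / app_read_kb

    # Example: an 8 KB application read against a 128 KB record.
    print(read_amplification(8, 128))   # -> 16.0, i.e. up to 16x overhead

Whether you actually pay that factor depends on caching, recordsize tuning, 
and how much of each record the workload ends up touching, which is exactly 
why the scaling is so hard to predict.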

Cheers,
Dave (the ORtera man)
 
 