On Sun, 27 Dec 2009, Tim Cook wrote:
C'mon, saying "all I did was change code and firmware" isn't a valid comparison at all. Ignoring that, I'm still referring to multiple streams which create random I/O to the backend disk.
I do agree with you that this is a problematic scenario. The issue is
how fast the data arrives. If the data is written quickly, then quite
a lot of data accumulates in each transaction group and ZFS can
usefully optimize the on-disk layout of that group. If the data only
trickles in, each transaction group holds just a little data from each
stream, and that is a difficult problem for any general-purpose
filesystem to solve.
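
To make the arithmetic concrete, here is a minimal back-of-the-envelope
sketch in Python (not ZFS code) of how much one stream contributes to a
single transaction group. The 5-second txg interval, 128 KiB recordsize,
and the stream rates are illustrative assumptions, not measured values.

    TXG_INTERVAL_S = 5            # assumed transaction group flush interval
    RECORDSIZE = 128 * 1024       # assumed 128 KiB recordsize

    def records_per_txg(stream_rate_bytes_per_s):
        """Full records a single stream contributes to one transaction group."""
        return (stream_rate_bytes_per_s * TXG_INTERVAL_S) // RECORDSIZE

    for label, rate in [("fast stream, 50 MB/s", 50_000_000),
                        ("trickling stream, 100 KB/s", 100_000)]:
        n = records_per_txg(rate)
        layout = ("large contiguous run" if n > 8
                  else "a few records, interleaved with other streams")
        print(f"{label}: ~{n} records per txg ({layout})")

With many trickling streams, each txg ends up interleaving a handful of
records from every stream, which is where the random back-end I/O pattern
you describe comes from.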
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss