'ZFS optimizes random writes versus potential sequential reads.'

Now I don't think the current readahead code is where we want it
to be yet, but in the same way that enough concurrent 128K I/O can
saturate a disk (I sure hope that Milkowski's data will confirm
this, otherwise I'm dead), enough concurrent read I/O will do the
same. So it's a simple matter of programming to detect sequential
file access and issue enough I/Os early enough.
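To make this concrete, here is a minimal sketch of that idea (not
the actual ZFS prefetch code; issue_async_read() is a hypothetical
placeholder for whatever async read path the filesystem provides):

    /*
     * Detect strictly sequential reads on a file and keep a
     * window of prefetch I/Os in flight ahead of the application.
     */
    #include <stdint.h>

    #define RECORDSIZE   (128 * 1024)  /* 128K records */
    #define PREFETCH_WIN 8             /* records kept in flight */

    struct seq_state {
            uint64_t next_offset;      /* offset we expect next */
            uint64_t prefetched_to;    /* how far ahead we've issued */
    };

    extern void issue_async_read(uint64_t off, uint64_t len);

    void
    on_read(struct seq_state *ss, uint64_t off, uint64_t len)
    {
            if (off != ss->next_offset) {
                    /* Not sequential: reset, don't prefetch. */
                    ss->next_offset = off + len;
                    ss->prefetched_to = ss->next_offset;
                    return;
            }
            ss->next_offset = off + len;

            /* Sequential: top the prefetch window back up. */
            while (ss->prefetched_to < ss->next_offset +
                (uint64_t)PREFETCH_WIN * RECORDSIZE) {
                    issue_async_read(ss->prefetched_to, RECORDSIZE);
                    ss->prefetched_to += RECORDSIZE;
            }
    }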


With UFS, we had a simple algorithm and one tunable: touch 2
sequential pages, read a cluster ahead, then don't do any other
I/O until all the data is processed. This is flawed in many
respects, and it certainly requires a large cluster size to get
good I/O throughput because of the stop-and-go behavior.
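Some back-of-the-envelope arithmetic shows why (the seek time,
media rate and per-cluster processing time below are illustrative
assumptions, not measurements). With stop-and-go the disk sits
idle while the application chews on the cluster, so effective
throughput is cluster / (I/O time + processing time):

    /* Effective throughput of stop-and-go readahead. */
    #include <stdio.h>

    int
    main(void)
    {
            double seek_ms = 5.0;      /* per-cluster seek */
            double xfer_mbps = 50.0;   /* media transfer rate */
            double process_ms = 10.0;  /* app time per cluster */
            double sizes_kb[] = { 128, 512, 1024, 4096 };

            for (int i = 0; i < 4; i++) {
                    /* KB / (MB/s) ~= ms */
                    double io_ms = seek_ms + sizes_kb[i] / xfer_mbps;
                    /* KB/ms ~= MB/s */
                    double tput = sizes_kb[i] / (io_ms + process_ms);
                    printf("%6.0fK cluster: %5.1f MB/s\n",
                        sizes_kb[i], tput);
            }
            return (0);
    }

With those assumed numbers a 128K cluster delivers about 7 MB/s
while a 4M cluster gets about 42 MB/s, which is exactly the
pressure toward large clusters.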

With ZFS (again, the prefetch code is still being looked at), I
think we can manage to get good I/O throughput using 128K records,
through enough concurrency and intelligent coding.
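As a rough user-level illustration of what that concurrency buys
(a sketch assuming POSIX AIO; the in-kernel prefetch path would of
course look different), keeping eight 128K reads in flight
overlaps the processing of one record with the disk service of the
next, so there is no stop-and-go gap:

    #include <aio.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>

    #define RECSZ (128 * 1024)
    #define DEPTH 8                    /* reads kept in flight */

    int
    main(int argc, char **argv)
    {
            struct aiocb cb[DEPTH];
            off_t next = 0;
            int fd;

            if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
                    return (1);

            /* Prime the pipeline with DEPTH outstanding reads. */
            for (int i = 0; i < DEPTH; i++) {
                    memset(&cb[i], 0, sizeof (cb[i]));
                    cb[i].aio_fildes = fd;
                    cb[i].aio_buf = malloc(RECSZ);
                    cb[i].aio_nbytes = RECSZ;
                    cb[i].aio_offset = next;
                    next += RECSZ;
                    (void) aio_read(&cb[i]);
            }

            for (;;) {
                    for (int i = 0; i < DEPTH; i++) {
                            const struct aiocb *list[1] = { &cb[i] };

                            (void) aio_suspend(list, 1, NULL);
                            if (aio_return(&cb[i]) <= 0)
                                    return (0);   /* EOF or error */
                            /* ... consume cb[i].aio_buf here ... */
                            cb[i].aio_offset = next; /* reissue ahead */
                            next += RECSZ;
                            (void) aio_read(&cb[i]);
                    }
            }
    }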


-r

