> 'ZFS optimizes random writes versus potential sequential reads.'
This remark focused on the allocation policy during writes,
not the readahead that occurs during reads.
Data that is rewritten randomly but in place within a sequential,
contiguous file (like a preallocated UFS file) is not optimized for
those writes, but the layout stays optimal for later sequential read
accesses.
With ZFS the writes are now fast, but the later sequential reads
probably are not. Readahead may help with latency here (the data may
already be in the file buffer when the DBMS requests it, although the
DBMS does its own readahead as well), but it will still be random I/O
at the disk (higher utilization compared to a sequential pattern).
This is not an issue for a single user, but it could become one with
many.
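
To make the access pattern concrete, here is a rough sketch of the
kind of workload I have in mind; the file name and sizes are invented
and error handling is minimal:

/* Illustration only: a DBMS-style workload on a preallocated file.
 * The file is rewritten in place at random 8K page offsets, then
 * scanned sequentially.  On UFS the blocks stay where they were
 * allocated, so the scan is sequential on disk; under ZFS's
 * copy-on-write allocation each rewritten page lands wherever it was
 * last written, so the same logical scan becomes random disk I/O.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define PAGESZ  8192
#define NPAGES  (1024 * 128)            /* 1 GB file */

int
main(void)
{
        char buf[PAGESZ];
        int fd = open("/tank/db/datafile", O_RDWR);
        int i;

        if (fd == -1)
                return (1);
        (void) memset(buf, 0, sizeof (buf));

        /* Phase 1: random in-place rewrites (what the DBMS does). */
        for (i = 0; i < 100000; i++) {
                off_t pg = (off_t)(rand() % NPAGES);
                (void) pwrite(fd, buf, PAGESZ, pg * PAGESZ);
        }

        /* Phase 2: logically sequential scan (table scan, backup, ...).
         * Same offsets in order, but the physical layout decides
         * whether the disk sees a sequential or a random read pattern.
         */
        for (i = 0; i < NPAGES; i++)
                (void) pread(fd, buf, PAGESZ, (off_t)i * PAGESZ);

        (void) close(fd);
        return (0);
}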
- Franz
Roch Bourbonnais - Performance Engineering wrote on 05/12/06 14:49:
'ZFS optimizes random writes versus potential sequential reads.'
Now I don't think the current readahead code is where we
want it to be yet but, in the same way that enough
concurrent 128K I/O can saturate a disk (I sure hope that
Milkowski's data will confirm this, otherwise I'm dead),
enough concurrent read I/O will do the same. So it's a
simple matter of programming to detect sequential file
access and issue enough I/Os early enough.
With UFS, we had a simple algorithm and one tunable: touch 2
sequential pages, read a cluster ahead. Then don't do any
other I/O until all the data is processed. This is flawed in
many respects, and it certainly requires a large cluster size
to get good I/O throughput because of the stop-and-go behavior.
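
Roughly the shape of that heuristic, as a sketch; the names and the
stub issue_read() are invented here, nothing below is the actual UFS
code:

#include <stdio.h>
#include <sys/types.h>

/* Hypothetical stand-in for the I/O layer; in the real filesystem
 * this would start an asynchronous cluster read. */
static void
issue_read(off_t page, size_t npages)
{
        (void) printf("readahead: pages %ld..%ld\n",
            (long)page, (long)(page + npages - 1));
}

struct ra_state {
        off_t   last_page;      /* last page the application touched  */
        off_t   ra_end;         /* end of the cluster already read in */
        size_t  cluster_size;   /* the one tunable                    */
};

/* Called on every page access: a second sequential touch past the end
 * of the previously read cluster triggers one cluster of readahead,
 * then nothing more until that cluster is consumed -- the stop-and-go
 * behavior mentioned above. */
static void
readahead_on_access(struct ra_state *ra, off_t page)
{
        if (page == ra->last_page + 1 && page >= ra->ra_end) {
                issue_read(page + 1, ra->cluster_size);
                ra->ra_end = page + 1 + (off_t)ra->cluster_size;
        }
        ra->last_page = page;
}

int
main(void)
{
        struct ra_state ra = { .last_page = -2, .ra_end = 0,
            .cluster_size = 16 };
        off_t p;

        for (p = 0; p < 64; p++)        /* simulate a sequential scan */
                readahead_on_access(&ra, p);
        return (0);
}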
With ZFS (again, the prefetch code is still being looked at), I
think we can manage to get good I/O throughput using 128K I/Os,
through enough concurrency and intelligent coding.
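
As a sketch of the idea only, not the real prefetch code: something
like the following keeps several 128K reads in flight with POSIX AIO;
DEPTH and the file name are made-up values.

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ   (128 * 1024)    /* 128K, the record size discussed */
#define DEPTH   8               /* concurrent reads kept outstanding */

int
main(void)
{
        struct aiocb    cb[DEPTH];
        char            *buf[DEPTH];
        off_t           next = 0;
        int             i;
        int             fd = open("/tank/bigfile", O_RDONLY);

        if (fd == -1)
                return (1);

        /* Prime the pipeline with DEPTH outstanding 128K reads. */
        for (i = 0; i < DEPTH; i++) {
                buf[i] = malloc(BLKSZ);
                (void) memset(&cb[i], 0, sizeof (cb[i]));
                cb[i].aio_fildes = fd;
                cb[i].aio_buf = buf[i];
                cb[i].aio_nbytes = BLKSZ;
                cb[i].aio_offset = next;
                next += BLKSZ;
                (void) aio_read(&cb[i]);
        }

        /* As each read completes, immediately issue the next one, so
         * the disk always sees DEPTH concurrent requests.  (A real
         * prefetcher would block in aio_suspend() rather than spin.)
         */
        for (;;) {
                for (i = 0; i < DEPTH; i++) {
                        if (aio_error(&cb[i]) == EINPROGRESS)
                                continue;
                        if (aio_return(&cb[i]) <= 0)
                                return (0);     /* EOF or error */
                        cb[i].aio_offset = next;
                        next += BLKSZ;
                        (void) aio_read(&cb[i]);
                }
        }
}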
-r