Franz Haberhauer writes:

 > > 'ZFS optimizes random writes versus potential sequential reads.'
 >
 > This remark focused on the allocation policy during writes,
 > not the readahead that occurs during reads.
 > Data that are rewritten randomly but in place in a sequential,
 > contiguous file (like a preallocated UFS file) are not optimized
 > for these writes, but for later sequential read accesses.
 >
 > Now with ZFS the writes are fast, but the later sequential reads
 > probably are not - readahead may help with this wrt. latency (data
 > may already be available in the file buffer when the DBMS requests
 > them - yet the DBMS does readahead as well). But it will still be
 > random I/O to the disk (higher utilization compared to a sequential
 > pattern). This is not an issue for a single user, but could be one
 > if there are many.
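To make the workload in that quote concrete, here is a minimal sketch of the access pattern being described, in C with assumed parameters (8K blocks, a 1 GB scratch file named "testfile"): the file is written once sequentially to preallocate it, then rewritten in place at random block offsets, then read back sequentially. On an update-in-place filesystem like UFS the rewrites land where the blocks already are; on ZFS the rewritten blocks are reallocated, which is what turns the later sequential read into random I/O at the disk. (In a real test you would drop or bypass the cache before the read pass; this is only meant to show the pattern.)

/*
 * Sketch of the workload described above: preallocate a contiguous
 * file, rewrite it in place at random 8K offsets, then read it back
 * sequentially.  Assumed parameters; error checking kept minimal.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BS      8192                    /* DBMS-style block size */
#define NBLKS   (128 * 1024)            /* 1 GB file */
#define NWRITES 100000                  /* random in-place rewrites */

int
main(void)
{
        static char buf[BS];
        int fd = open("testfile", O_RDWR | O_CREAT, 0644);
        long i;

        if (fd < 0) {
                perror("open");
                return (1);
        }

        /* preallocate the file with one sequential write pass */
        for (i = 0; i < NBLKS; i++)
                (void) pwrite(fd, buf, BS, (off_t)i * BS);
        (void) fsync(fd);

        /* random in-place rewrites, as a DBMS would do */
        for (i = 0; i < NWRITES; i++)
                (void) pwrite(fd, buf, BS, (lrand48() % NBLKS) * (off_t)BS);
        (void) fsync(fd);

        /* the later sequential read the remark is about */
        for (i = 0; i < NBLKS; i++)
                (void) pread(fd, buf, BS, (off_t)i * BS);

        (void) close(fd);
        return (0);
}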
Last summer, a little experiment took me by surprise. We had a tight
loop issuing single synchronous I/Os to a raw device (a rough sketch of
such a loop is included below). Results were:

 > > > size: 2048, count 1000, secs 3.96 :random (same cyl ?)
 > > > size: 2048, count 1000, secs 6.02 :sequential
 > > > size: 2048, count 1000, secs 6.34 :random (random cyl ?)
 > > >
 > > > So it looks like for a 2K write we have, in order:
 > > >
 > > > write to same cylinder, random offset     (fastest)
 > > > write to same cylinder, sequential offset (slower)
 > > > write to random cylinder                  (slowest)

So it kind of makes sense: if I issue a write just after the previous
one completes, the target sector has just passed under the head, so it
takes a full rotational latency before the write can start. If the
write goes to a random offset on the same cylinder, the expected wait
is more like half a rotation. Sequential is good _if_ you can keep a
pipeline of I/Os hitting in stride. But with a pipeline of enough
concurrent I/Os we can get close to that kind of performance; or at
least this has not been proven wrong yet.

	-r
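Here is a minimal sketch of what such a loop might look like in C, under assumed parameters: it times 1000 single synchronous 2K pwrite(2) calls against a raw device, once sequentially and once at random offsets. The device path is a placeholder, the random offsets are not cylinder-aware (so they only approximate the "random cyl" case above), and the program overwrites whatever is on the device, so point it at a scratch disk only.

/*
 * Sketch of the tight loop described above: 1000 single synchronous
 * 2K writes to a raw device, timed once sequentially and once at
 * random offsets.  Assumed device path; scratch disks only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define IOSIZE  2048
#define COUNT   1000
#define REGION  (512 * 1024)            /* 2K blocks in a ~1 GB region */

static double
now(void)
{
        struct timeval tv;

        (void) gettimeofday(&tv, NULL);
        return (tv.tv_sec + tv.tv_usec / 1e6);
}

/* Issue COUNT synchronous pwrite(2) calls, one at a time, and time them. */
static void
run(int fd, const char *label, off_t (*next)(long))
{
        static char buf[IOSIZE];
        double t0 = now();
        long i;

        for (i = 0; i < COUNT; i++) {
                if (pwrite(fd, buf, IOSIZE, next(i)) != IOSIZE) {
                        perror("pwrite");
                        exit(1);
                }
        }
        (void) printf("size: %d, count %d, secs %.2f :%s\n",
            IOSIZE, COUNT, now() - t0, label);
}

static off_t
seq_off(long i)
{
        return ((off_t)i * IOSIZE);
}

static off_t
rand_off(long i)
{
        (void) i;
        return ((off_t)(lrand48() % REGION) * IOSIZE);
}

int
main(int argc, char **argv)
{
        /* e.g. /dev/rdsk/c0t0d0s0 -- placeholder, pass your own path */
        const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c0t0d0s0";
        int fd = open(dev, O_WRONLY | O_DSYNC);

        if (fd < 0) {
                perror(dev);
                return (1);
        }
        run(fd, "sequential", seq_off);
        run(fd, "random", rand_off);
        (void) close(fd);
        return (0);
}

Compile with something like "cc -o rawloop rawloop.c" and run it as root against a scratch device; the output lines are in the same format as the results quoted above.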