billtodd wrote:
I do want to comment on the observation that "enough concurrent 128K I/O can saturate a disk" - the apparent implication being that one could therefore do no better with larger accesses, which is an incorrect conclusion.

Current disks can stream out 128 KB in 1.5 - 3 ms, while taking 5.5 - 12.5 ms for the average-seek-plus-partial-rotation required to get to that 128 KB in the first place. Thus on a full drive, serial random accesses to 128 KB chunks will yield only about 20% of the drive's streaming capability (by contrast, serial random accesses in 4 MB contiguous chunks achieve around 90% of it). One can do better on disks that support queuing if one allows queues to form, but this trades significantly increased average operation latency for the increase in throughput - and said increase still falls far short of the 90% utilization achievable with 4 MB chunking.

Enough concurrent 0.5 KB I/O can also saturate a disk, after all - but this says little about effective utilization.
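The arithmetic behind those percentages is simple to sketch: effective utilization is transfer time divided by (seek time + transfer time). The seek and streaming numbers below are illustrative assumptions picked from the ranges quoted above, not measurements of any particular drive.

```python
# Back-of-envelope model of serial random-read utilization.
# ASSUMED numbers: 9 ms average seek + partial rotation, 64 MB/s
# sustained streaming (i.e. ~128 KB in 2 ms) - adjust for your drive.
SEEK_PLUS_ROTATION_MS = 9.0
STREAM_MB_PER_S = 64.0

def utilization(chunk_kb: float) -> float:
    """Fraction of streaming bandwidth achieved by serial random
    accesses of the given chunk size."""
    transfer_ms = chunk_kb / 1024.0 / STREAM_MB_PER_S * 1000.0
    return transfer_ms / (SEEK_PLUS_ROTATION_MS + transfer_ms)

for kb in (0.5, 128.0, 4 * 1024.0):
    print(f"{kb:7.1f} KB chunks -> {utilization(kb):6.1%} of streaming rate")
```

With these assumed figures, 128 KB chunks land near the ~20% quoted above and 4 MB chunks near ~90%, while 0.5 KB chunks achieve well under 1% - which is the point: saturating the disk with tiny I/Os says nothing about effective utilization.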

I think I can summarize where we are on this.

This is the classic big-{packet|block|$-line|bikini} versus
small-{packet|block|$-line|bikini} argument.  One size won't fit all.

The jury is still out on what all of this means for any given application.
 -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss