On 2020-09-23 09:56, Peng Yu wrote:
Hi,
Many people use dd to test disk performance. There is a key option of dd,
bs, and I understand what it literally means. But it is not clear how
the performance measured by dd with a specific bs maps to the disk
performance of other I/O-bound programs. Could anybody tell me how
to interpret bs in terms of predicting the performance of other
I/O-bound programs? Thanks.
The bs likely maps to performance like this:
perf (fraction of max)
1.0| ___-------------------------_
| _/
| _/
| /
| |
| /
0||
+----------|------------------------|--
bs A B
A bs of zero is impossible, so we can call that point "no performance".
Ridiculously small values of bs force the program to make too many
system calls. The larger the bs, the fewer syscalls dd has to make,
so performance improves with diminishing returns until the maximum
theoretical performance is reached for that OS, hardware and approach
(read/write loop). If bs then gets ridiculously large, so that the
buffers no longer fit into the on-chip CPU caches, returns almost
certainly turn negative.
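To put rough numbers on the syscall overhead: dd issues one read()
and one write() per block, so the syscall count for a fixed amount of
data is just 2 * total / bs. A quick shell sketch (the 1 GiB figure
and the two bs values are arbitrary examples, not from dd itself):

```shell
# Rough syscall count for copying 1 GiB at two block sizes.
# dd makes one read() plus one write() per block of size bs.
total=$((1024 * 1024 * 1024))
echo "bs=512:    $((2 * total / 512)) syscalls"      # 4194304
echo "bs=1 MiB:  $((2 * total / 1048576)) syscalls"  # 2048
```

A factor of ~2000 fewer syscalls is why throughput climbs so steeply
at the left edge of the curve above.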
The range of sizes from A to B is probably wide enough that an
intelligent guess at a good bs is likely to land in it.
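If you want to find the flat region empirically rather than guess, a
minimal sketch is to time the same total transfer at several block
sizes. This example copies /dev/zero to /dev/null, which takes the
disk out of the picture entirely and measures only the syscall/copy
overhead; point if= or of= at a real device or file to benchmark
actual disk behavior (the 64 MiB total and the bs list are arbitrary
choices for illustration):

```shell
# Time a fixed 64 MiB transfer at several block sizes.
# dd prints its throughput summary on stderr, hence 2>&1.
total=$((64 * 1024 * 1024))
for bs in 512 4096 65536 1048576; do
    count=$((total / bs))
    printf 'bs=%-8s ' "$bs"
    dd if=/dev/zero of=/dev/null bs="$bs" count="$count" 2>&1 | tail -n 1
done
```

The bs where the reported throughput stops improving marks the left
edge (A) of the flat region for your system.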