For programs that accept input and produce output in either computer-based binary powers of 2 (K/M/G) or decimal powers of 10:

If the input units are specified in powers of 2, then the output should be given in the same units. Example:

    dd if=/dev/zero of=/dev/null bs=256M count=2

That's 512MB total... but what do I see:

    536870912 bytes (537 MB) copied, 0.225718 s, 2.4 GB/s

Clearly 256*2 != 537. At the very least this violates the design principle of 'least surprise' and/or 'least astonishment'.

The SI suffixes are a pox put on us by the disk manufacturers, because they wanted to pretend to have 2GB or 4GB drives when they really only had 1.8GB, or 1907MB. Either way, disks are built out of 512-byte (or 4096-byte) sectors, so while you can exactly specify sizes in powers of 1024, you can't always do the same with powers of 1000 (1 MB is not a multiple of 512, nor is 1 GB a multiple of 4096 on some new disks). If I compare this to "df" and see my disk taking 2G, then I should be able to transfer it to another 2G disk, but this is not the case, due to immoral actions on the part of disk makers.

People knew, at the time, that 9600 baud was 960 characters/second -- that was a phone communication speed where decimal was used, but for storage, units were expressed in multiples of 512 (which the power-of-10 prefixes are not). (Yes, I know that for official purposes, and where established usage held sway before the advent of computers, the metric prefixes were understood as powers of 10, but in computers there was never confusion until disk manufacturers tried to take advantage of people.) Memory does not come in kB/mB/gB (k/m/g = 10^3, 10^6, 10^9); it comes in sizes of KB/MB/GB (K/M/G = 2^10, 2^20, 2^30).

But this isn't about changing all units everywhere -- it's about maintaining consistency with the units the user used on input (where that can be determined).

Reasonable? Or are inconsistent results more reasonable? ;-)
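As a quick sanity check on the arithmetic above, here is a small shell sketch (assuming only POSIX $((...)) arithmetic) that renders the byte count from the dd example in both unit systems; the names `bytes`, `mib`, and `mb` are just illustrative:

```shell
#!/bin/sh
# The byte count produced by: dd bs=256M count=2
bytes=$((256 * 1024 * 1024 * 2))

# Binary (power-of-2) megabytes: MiB, base 1024
mib=$((bytes / (1024 * 1024)))

# Decimal (power-of-10) megabytes: MB, base 1000 (truncated)
mb=$((bytes / (1000 * 1000)))

echo "$bytes bytes = ${mib} MiB = ~${mb} MB"
```

Running it prints `536870912 bytes = 512 MiB = ~536 MB`; dd reports 537 MB because it rounds rather than truncates. GNU coreutils also ships `numfmt`, whose `--to=iec` and `--to=si` options perform these two conversions directly.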