On Fri, Jan 10, 2025 at 02:46:13AM +0100, Urs Thuermann wrote:
> For example, my computers had 5.12 kB,
> 65.536 kB, 16.777216 MB, 67.108864 MB, 268.435456 MB, 1.073741824 GB,
> and 8.589934592 GB of RAM. Perfectly correct, but I prefer to say
> they had 5 kiB, 64 kiB, 16 MiB, 64 MiB, 256 MiB, 1 GiB, and 8 GiB of
> RAM.
I 100% guarantee that if we abandoned oddball power-of-2 units
you'd still be able to talk about an 8Gig RAM computer without any
ambiguity when discussing mainstream devices. Just like you can buy a
"23 inch class" monitor that isn't 23 inches. Fun fact: 4k monitors
aren't actually 4k. There are technical reasons and history and on and
on, and it doesn't matter.
> Block devices like floppy disks, hard disks, SSDs, etc. also have
> block sizes which are powers of 2, like 256, 512, and 4096 bytes.
> This is not because of checksumming as was suggested in this thread,
> but because it makes it easier (e.g. for DMA controllers) to copy
> from/to pages of RAM.
So what? Why should users care about such details? We already conceal
most of them in the interest of a better UX. Disk space used to be
reported in 512B blocks because that's the addressable unit of a block
device, and people *hated it*. Files are byte-addressable, ls needs to
output a size in bytes, and people simply weren't interested in doing
the math to convert bytes to blocks depending on context. It doesn't
matter how many arguments you make about the underlying physical
limitations, it's something that users don't need to know and don't want
to know.
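To make the blocks-vs-bytes annoyance concrete, here's a minimal sketch (not from the thread, just an illustration) of the conversion users used to be stuck doing in their heads:

```python
def bytes_to_blocks(size_bytes, block_size=512):
    """Round up to whole 512-byte blocks, the way disk space was
    traditionally reported. Users see bytes; the device sees blocks."""
    return -(-size_bytes // block_size)  # ceiling division

# A 1000-byte file occupies two 512-byte blocks:
print(bytes_to_blocks(1000))        # 2
print(bytes_to_blocks(1000) * 512)  # 1024 bytes actually allocated
```

The byte count is what people actually care about; the block count is an implementation detail, which is exactly why tools stopped exposing it by default.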
> Therefore, my floppy disks had exactly 170.75 kiB, 720 kiB, and 1.44
> MiB --- or as you would like to put it --- 174.848 kB, 737.280 kB, and
> 1.47456 MB. So again: Are kiB, MiB, GiB, and TiB really pointless?
Yes. Note that you're talking about really small and really old storage
formats. "Anachronistic" is the word.
And here's the really, really funny fact that undermines your entire
argument: the 1.44M floppy *was not* 1.44MiB. It was 1440kiB, or 1.47MB or
1.41MiB. The cracks in the power of two system were showing all the way
back then. Your argument largely boils down to "I like these numbers
better than those numbers because I remember them fondly from my youth",
and that's not a great basis for a system of measurement.
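The floppy arithmetic is easy to check. Assuming the standard 3.5" HD geometry (80 tracks, 2 sides, 18 sectors per track, 512 bytes per sector), the "1.44M" is neither MB nor MiB but a mixed 1000x1024 unit:

```python
# Standard 3.5" high-density floppy geometry (assumed, not from the thread):
capacity = 80 * 2 * 18 * 512     # tracks * sides * sectors * bytes/sector
print(capacity)                  # 1474560 bytes = 1440 KiB
print(capacity / 1000**2)        # 1.47456 decimal MB
print(capacity / 1024**2)        # 1.40625 binary MiB
print(capacity / (1000 * 1024))  # 1.44 -- the mixed unit actually marketed
```

So "1.44M" was never consistent with either system, which is the crack being pointed out.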
> This is not really confusing, except for people who are too dumb to
> understand units and their conversions.
The entire argument for power of two units has always had more than a
hint of elitism behind it. If you know the secret handshake you can be
in the club?
> Granted, some confusion came from using k, M, G for the power-of-2 based units.
"Some" is doing a lot of work in that sentence. As you demonstrated just
a bit earlier.
> Back in the days
> when we had only kilobytes this didn't matter too much since 2**10 is
> so close to 1000 (only 2.4% more) that it was just practical to use
> uppercase K to mean "a little more than k", i.e. 2**10, but still
> speak of kilobytes. When we reached sizes of megabytes, we couldn't
> use a "larger than M letter", so M was simply used for both, 10**6 and
> 2**20, but it was usually clear from the context, what was meant.
So...it became more of a problem the larger the sizes being discussed?
Sounds like it was (arguably) appropriate for tiny devices 50 years ago,
maybe not anymore?
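The drift really does grow with each prefix. A quick sketch of how far each binary prefix sits above its decimal counterpart:

```python
# Relative overshoot of 2**(10*n) versus 10**(3*n) for each prefix:
# 2.4% at Ki, but it compounds with every step up.
for n, prefix in enumerate(["Ki", "Mi", "Gi", "Ti", "Pi"], start=1):
    drift = (2**(10 * n) / 10**(3 * n) - 1) * 100
    print(f"{prefix}: {drift:.1f}% larger than the decimal prefix")
```

At kilobytes the gap is 2.4%; by terabytes it's a full 10%, which is why the "K is just a big k" hand-wave stopped working.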
All of this goes back to RAM, which is becoming less and less of a
factor. If the whole world revolved around memory sizes and things with
address lines then maybe I'd accept that measuring everything in powers
of two makes sense. But the world doesn't. And it very much doesn't make
sense to use two different (but confusingly close) scales and switch
(inconsistently) between them depending on context. Computing these days
depends just as much on networking as RAM, and networking has always
used power of 10 units. If I really wanted to fight the good fight I'd
get rid of the byte too, but we're probably stuck with that.