> On Aug 15, 2024, at 12:46 PM, Peter Ekstrom via cctalk 
> <cctalk@classiccmp.org> wrote:
> 
> Hi to the group,
> 
> I am tinkering with some C-code where I am working on something that can
> process some microcode. The microcode is from a DG MV/10000 machine and
> while working on it, I noticed it is in little-endian. That's simple enough
> to work around but that had me wondering, why do we have big and little
> endianness? What is the benefit of storing the low-order byte first? Or is
> that simply just an arbitrary decision made by some hardware manufacturers?
> 
> I am mostly just curious.
> 
> Thanks,
> Peter / KG4OKG

The short answer is "it's historic and manufacturers have done it in different 
ways".

You might read the original paper on the topic, "On Holy Wars and a Plea for 
Peace" by Danny Cohen (IEN-137, 1 April 1980): 
https://www.rfc-editor.org/ien/ien137.txt

And yes, different computers have used different orderings, not just 
characters-in-word ordering but also bit position numbering.  For example, and 
very confusingly, there are computers whose conventional numbering assigns the 
lowest bit number (0 or 1) to the most significant bit.  The more common 
numbering, with 0 for the LSB, has the property that setting bit n in a word 
produces the value 2^n, which is more convenient than, say, 2^(59-n).

        paul
