On Fri, 16 Jun 2017 16:43:38 +0100, David W Noon wrote:
>...
>This is not the way computers do arithmetic. Adding, subtracting, etc.,
>are performed in register-sized chunks (except packed decimal) and the
>valid sizes of those registers are determined by architecture.
> 
I suspect programmed decimal arithmetic was a major motivation for
little-endian.

>In fact, on little-endian systems the numbers are put into big-endian
>order when loaded into a register. Consequently, these machines do
>arithmetic in big-endian.
>
Ummm... really?  I believe IBM computers number the bits in a register
with 0 being the most significant bit; non-IBM computers number them with
0 being the least significant bit.  I'd call the latter convention bitwise
little-endian, and it gives an easy summation formula for conversion to
unsigned integers: bit i contributes 2^i to the value.
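To make the two formulas concrete, here is a quick C sketch of my own
(the names and the 16-bit register width are my choices, not anything
from an architecture manual).  MSB-0 is the IBM numbering, LSB-0 the
non-IBM one; both loops reconstruct the same unsigned value from the
same bits, which is the point -- the numbering is only a labeling.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t r = 0xBEEF;          /* a 16-bit register image */
        unsigned lsb0 = 0, msb0 = 0;

        for (int i = 0; i < 16; i++) {
            /* LSB-0 (non-IBM): bit i contributes 2^i */
            lsb0 += ((unsigned)(r >> i) & 1u) << i;
            /* MSB-0 (IBM): bit i contributes 2^(15 - i) */
            msb0 += ((unsigned)(r >> (15 - i)) & 1u) << (15 - i);
        }
        /* Both conventions recover the same value: BEEF BEEF */
        printf("%04X %04X\n", lsb0, msb0);
        return 0;
    }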

>As someone who was programming DEC PDP-11s more than 40 years ago, I can
>assure everybody that little-endian sucks.
>
But do the computers care?  (And which was your first system?  Did you
feel profound relief when you discovered the alternative convention?)

IIRC, the PDP-11 provided for writing tapes either little-endian, which
was wrong for sharing numeric data with IBM systems, or big-endian, which
was wrong for sharing text data.
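A hedged illustration in C (modern C on a byte-addressable machine, not
PDP-11 code) of why no single tape byte order could win: the byte swap
that rescues a 16-bit number for a big-endian reader reverses a pair of
packed characters, and vice versa.

    #include <stdio.h>
    #include <stdint.h>

    /* Swap the two bytes of a 16-bit word. */
    static uint16_t swap16(uint16_t w)
    {
        return (uint16_t)((w << 8) | (w >> 8));
    }

    int main(void)
    {
        /* A numeric word: a little-endian writer puts the low byte
           first, so a big-endian reader sees the bytes reversed
           unless they are swapped. */
        uint16_t num = 0x0102;
        printf("numeric: %04X  swapped: %04X\n",
               (unsigned)num, (unsigned)swap16(num));

        /* Two ASCII characters packed into one word: the same swap
           that fixes the number reverses the text. */
        uint16_t txt = (uint16_t)(('H' << 8) | 'I');
        uint16_t rev = swap16(txt);
        printf("text: %c%c  swapped: %c%c\n",
               txt >> 8, txt & 0xFF, rev >> 8, rev & 0xFF);
        return 0;
    }

Since text travels one character per byte, a stream whose bytes are
already in the right order for text necessarily has its multi-byte
numbers scrambled for the other camp.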

For those who remain unaware on a Friday:
    https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics

-- gil
