[Default] On 16 Jun 2017 11:18:42 -0700, in bit.listserv.ibm-main
[email protected] (Jesse 1 Robinson) wrote:

>TGIF. With due respect to the view that Indian (Hindi? Sanskrit?) numerals, 
>transmitted via Arabic, were the progenitor of our modern big-endian bias, I'd 
>like to point out that Roman numerals--remember them, you old dudes?--are 
>apparently big-endian. Lord knows who invented that convoluted system, but it 
>persisted in academia and in commerce for centuries. 

As I recall, 9 is IX, not VIIII, and 90 is XC, not LXXXX.  Is anyone
energetic enough to verify this?  I am not tonight.

Clark Morris
>
>Friday off topic. I read somewhere that at the time of American independence 
>circa 1776, it was de rigueur for an educated person to be able to do 
>*arithmetic* in Roman numerals. You could not otherwise claim to be properly 
>schooled. A footnote on the whimsy of stodgy education standards. 
>
>J.O.Skip Robinson
>Southern California Edison Company
>Electric Dragon Team Paddler 
>SHARE MVS Program Co-Manager
>323-715-0595 Mobile
>626-543-6132 Office <=== NEW
>[email protected]
>
>
>-----Original Message-----
>From: IBM Mainframe Discussion List [mailto:[email protected]] On 
>Behalf Of Paul Gilmartin
>Sent: Friday, June 16, 2017 10:56 AM
>To: [email protected]
>Subject: (External):Re: RFE? xlc compile option for C integers to be "Intel 
>compat" or Little-Endian
>
>On Fri, 16 Jun 2017 16:43:38 +0100, David W Noon wrote:
>>...
>>This is not the way computers do arithmetic. Adding, subtracting, etc., 
>>are performed in register-sized chunks (except packed decimal) and the 
>valid sizes of those registers are determined by the architecture.
>> 
>I suspect programmed decimal arithmetic was a major motivation for 
>little-endian.
>
>>In fact, on little-endian systems the numbers are put into big-endian 
>>order when loaded into a register. Consequently, these machines do 
>>arithmetic in big-endian.
>>
>Ummm... really?  I believe IBM computers number bits in a register with
>0 being the most significant bit; non-IBM computers with 0 being the least 
>significant bit.  I'd call that a bitwise little-endian.  And it gives an easy 
>summation formula for conversion to unsigned integers.
>
>>As someone who was programming DEC PDP-11s more than 40 years ago, I 
>>can assure everybody that little-endian sucks.
>>
>But do the computers care?  (And which was your first system?  Did you feel 
>profound relief when you discovered the alternative convention?)
>
>IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for 
>sharing numeric data with IBM systems, or big-endian, which was wrong for 
>sharing text data.
>
>For those who remain unaware on a Friday:
>    https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics
>
>-- gil
>
>
>----------------------------------------------------------------------
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to [email protected] with the message: INFO IBM-MAIN
